Multisite analysis #10
-
Hi,
-
@Fahad021 you are correct. We hope to increase the limit as capacity allows.

More details than you probably want: each POST to the job endpoint can use up to 3 CPUs and 4 GB RAM at its peak. Many of our users wish to run thousands of jobs per day, so to allow equal access we have set the limit to 200 jobs per hour. A typical job takes 45 seconds to a minute, so if five users each submit 200 jobs per hour that is 1,000 jobs per hour, or about 17 jobs per minute, consuming up to 51 cores and 68 GB of RAM.

It is important to note that these numbers are ballpark averages. Some jobs are faster, and the most complex ones take 5 minutes or more. All jobs sit in a queue waiting for CPUs/workers to become available, so submitting 200 jobs in one hour does not mean that all of those jobs will complete within that hour.
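For anyone scripting against this limit, here is a minimal client-side throttling sketch that spaces submissions evenly to stay under 200 POSTs per hour. The endpoint URL, the `api_key` query parameter, the payload shape, and the `id` field in the response are placeholders for illustration, not the actual API contract; check the job endpoint docs for the real request format.

```python
import time
import requests

API_KEY = "DEMO_KEY"  # placeholder; use your own developer.nrel.gov key
JOB_URL = "https://developer.nrel.gov/api/example/v1/jobs"  # hypothetical endpoint
MAX_JOBS_PER_HOUR = 200

def submit_jobs(payloads):
    """Submit payloads one at a time, pacing requests to stay under the hourly cap."""
    # Spacing submissions evenly (3600 s / 200 = 18 s apart) keeps a long batch
    # under the limit without tracking a sliding window of timestamps.
    min_interval = 3600.0 / MAX_JOBS_PER_HOUR
    job_ids = []
    for payload in payloads:
        started = time.monotonic()
        resp = requests.post(JOB_URL, params={"api_key": API_KEY}, json=payload)
        if resp.status_code == 429:
            # Rate limited anyway: back off briefly and retry once.
            time.sleep(60)
            resp = requests.post(JOB_URL, params={"api_key": API_KEY}, json=payload)
        resp.raise_for_status()
        job_ids.append(resp.json().get("id"))  # response field name is an assumption
        # Sleep off the remainder of the interval before the next POST.
        elapsed = time.monotonic() - started
        time.sleep(max(0.0, min_interval - elapsed))
    return job_ids
```

Pacing on the client side also smooths the load you put on the shared queue, so your later jobs are less likely to sit behind your earlier ones.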
-
FYI, you can check your rate-limit usage using the information on developer.nrel.gov
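As a small sketch of what that looks like in practice: developer.nrel.gov responses typically carry `X-RateLimit-Limit` and `X-RateLimit-Remaining` headers you can inspect. The endpoint below is just an example of a rate-limited NREL API; the key is a placeholder.

```python
import requests

API_KEY = "DEMO_KEY"  # placeholder; use your own developer.nrel.gov key
URL = "https://developer.nrel.gov/api/alt-fuel-stations/v1.json"  # any rate-limited endpoint works

resp = requests.get(URL, params={"api_key": API_KEY, "limit": 1})
resp.raise_for_status()

# Print the rate-limit headers returned with the response.
print("Limit:    ", resp.headers.get("X-RateLimit-Limit"))
print("Remaining:", resp.headers.get("X-RateLimit-Remaining"))
```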