This is because the show() method in /controller/analysis_controller calls the search() method in /model/analysis, which accesses MongoDB and builds a Mongoid::Criteria via .where() from the Mongoid gem.
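For illustration, here is a minimal sketch of that controller-to-model flow; the class layout and method bodies are assumptions based on the description above, not the actual repository code:

```ruby
# Hypothetical sketch of the flow described above (names follow the description, not the repo).
# app/controllers/analysis_controller.rb
class AnalysisController < ApplicationController
  def show
    @analysis = Analysis.find(params[:id])
    # search() returns a Mongoid::Criteria; nothing is fetched from MongoDB yet
    @simulations = @analysis.search
  end
end

# app/models/analysis.rb
class Analysis
  include Mongoid::Document

  def search
    # .where() only builds the criteria; execution is deferred until it is enumerated
    Datapoint.where(analysis_id: id)
  end
end
```

The important point is that the criteria is lazy: the controller hands it to the view unevaluated.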
That query is executed lazily, later, in the simulations.each block of the analysis view, and it can run out of memory if the .dataSize() of the simulations in the DB is larger than the MONGO_MEM environment variable, which sets the internalQueryMaxAddToSetBytes parameter in the various docker-compose.yml files used for deployments.
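As a hedged illustration of how such an environment variable can be wired through docker-compose (the service name and image tag here are placeholders, not necessarily what the deployment files use):

```yaml
# docker-compose.yml (illustrative): MONGO_MEM (in bytes) is forwarded to mongod
services:
  mongo:
    image: mongo:6.0.7
    command: mongod --setParameter internalQueryMaxAddToSetBytes=${MONGO_MEM}
```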
Mongo recommends setting allowDiskUse: true, similar to what we added here in the Datapoint.collection.aggregate() call. This cannot be done on the .where() queries, however, since it is not an allowable argument there, and converting the .where() calls to .aggregate() queries would be a real pain and require significant changes.
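For reference, a sketch of what the driver-level opt-in looks like on an aggregation; the pipeline contents are made up for illustration:

```ruby
# Illustrative only: an aggregation can opt in to spilling to disk via allow_disk_use,
# whereas a Mongoid .where() criteria has no equivalent option to pass through.
pipeline = [
  { '$match' => { 'analysis_id' => analysis.id } },
  { '$group' => { '_id' => '$status', 'count' => { '$sum' => 1 } } }
]
Datapoint.collection.aggregate(pipeline, allow_disk_use: true).to_a
```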
Starting in Mongo 6.0, allowDiskUse: true is the default for queries. While this may impact performance for large queries, it will keep the analysis GUI page from crashing.
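If it helps to confirm the new behavior after the upgrade, the server default can be inspected at runtime; a sketch using the Ruby driver, assuming a local unauthenticated mongod (MongoDB 6.0 introduces the allowDiskUseByDefault server parameter, which defaults to true):

```ruby
require 'mongo'

# Sketch: query the admin database for the 6.0 server parameter that enables
# disk use by default for queries that exceed the in-memory limits.
client = Mongo::Client.new('mongodb://localhost:27017/admin')
reply = client.database.command(getParameter: 1, allowDiskUseByDefault: 1).first
puts "allowDiskUseByDefault: #{reply['allowDiskUseByDefault']}"
```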
So, bump Mongo from 4.0.4 to 6.0.7, which will require bumping Mongoid from 7.2 to 7.4.3 according to the compatibility chart.
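A minimal sketch of the corresponding dependency pin (the exact constraint and file layout in the real repo may differ):

```ruby
# Gemfile (sketch): Mongoid bumped to match MongoDB 6.0 per the compatibility chart
gem 'mongoid', '~> 7.4.3'
```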
Note: This issue appeared in a large analysis where the --cli_debug and --cli_verbose options were left on, which put 7 MB of OpenStudio CLI output into the sdp_log_file for each datapoint, and that output gets stored in MongoDB. After 50 datapoints, this exceeded the query memory limit for the DB and crashed the webpage. The analysis still completed, but the user experience was not great. The simulate datapoint log, at debug settings, contains the 'registry' information for each Measure. In this case, there was a discrete variable with 15,000 values, and that information was written to the log several times, once for each measure, which is why the datapoint entry was so large. We should look at ways to remove the simulate datapoint log from the database and provide it to the user in a different way, but that can be a new issue.
The Mongo memory limit has appeared again.