Tracking issue for known performance challenges #800
Comments
Part of the performance problem is the way a lot of the parameter data is used. Rather than adding upper bounds, these are instead added as constraints like:

```julia
@constraint(model, [t=1:T], generation[t] <= available[t])
```

There are also others like

```julia
@constraint(model, [t=1:T], generation[t] <= constant * available[t])
```

and

```julia
@constraint(model, [t=1:T], generation[t] <= sum(available[t, i] for i in plants))
```

Adding linear constraints simplifies the usage with ParameterJuMP, but it comes at the cost of an extra constraint for every variable. We could consider using …
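For the simple single-variable case, one alternative is to set the time-varying value as a native variable bound rather than a constraint row, and update it in place between solves. A minimal sketch (the `available` data and solver choice are hypothetical, not from this issue):

```julia
using JuMP, HiGHS

T = 24
available = rand(100.0:500.0, T)   # hypothetical availability time series

model = Model(HiGHS.Optimizer)
@variable(model, generation[1:T] >= 0)

# Set the time-varying upper bound directly on the variable instead of
# adding T extra rows to the constraint matrix.
for t in 1:T
    set_upper_bound(generation[t], available[t])
end

# When the time series changes between solves, update the bound in place:
new_available = rand(100.0:500.0, T)
for t in 1:T
    set_upper_bound(generation[t], new_available[t])
end
```

This only covers the plain `generation[t] <= available[t]` form; the scaled and summed variants above still need a constraint row (though its right-hand side or coefficients can be modified in place).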
Thanks, I think we can handle these cases for some parameters by dispatching the implementation according to the parameter type, reducing the burden for some of the most common cases, i.e., time series. There is also the case where parameters are used in a long expression, like the injections per node.
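The dispatch-by-parameter-type idea could be sketched as follows (all type and function names here are hypothetical illustrations, not PowerSimulations.jl APIs):

```julia
# Dispatch the parameter update on the parameter's type so the common
# time-series case gets a dedicated fast path.
abstract type ParameterSource end

struct TimeSeriesParameter <: ParameterSource
    values::Vector{Float64}
end

struct ScalarParameter <: ParameterSource
    value::Float64
end

# Fast path: a time series maps directly onto the per-period bounds.
update_parameter!(bounds::Vector{Float64}, p::TimeSeriesParameter) =
    copyto!(bounds, p.values)

# Generic path: broadcast a scalar across all periods.
update_parameter!(bounds::Vector{Float64}, p::ScalarParameter) =
    fill!(bounds, p.value)
```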
Injections per node with long expressions: keep that. But for simple upper bounds, you can probably be cleverer in how you go about it.
#899 will address point 1 by removing the use of ParameterJuMP and POI altogether.
It might be possible to speed up the state update in this block of code using multithreading:

```julia
function _update_system_state!(sim::Simulation, model::EmulationModel)
sim_state = get_simulation_state(sim)
simulation_time = get_current_time(sim)
system_state = get_system_states(sim_state)
store = get_simulation_store(sim)
em_model_name = get_name(model)
for key in get_container_keys(get_optimization_container(model))
!should_write_resulting_value(key) && continue
update_system_state!(system_state, key, store, em_model_name, simulation_time)
end
IS.@record :execution StateUpdateEvent(simulation_time, em_model_name, "SystemState")
return
end
function _update_simulation_state!(sim::Simulation, model::EmulationModel)
# Order of these operations matters. Do not reverse.
# This will update the state with the results of the store first and then fill
    # the remaining values with the decision state.
_update_system_state!(sim, model)
_update_system_state!(sim, get_name(model))
return
end
function _update_simulation_state!(sim::Simulation, model::DecisionModel)
model_name = get_name(model)
store = get_simulation_store(sim)
simulation_time = get_current_time(sim)
state = get_simulation_state(sim)
model_params = get_decision_model_params(store, model_name)
for field in fieldnames(DatasetContainer)
for key in list_decision_model_keys(store, model_name, field)
!has_dataset(get_decision_states(state), key) && continue
res = read_result(DenseAxisArray, store, model_name, key, simulation_time)
update_decision_state!(state, key, res, simulation_time, model_params)
end
end
IS.@record :execution StateUpdateEvent(simulation_time, model_name, "DecisionState")
return
end
```
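If the per-key updates are independent, the loop in `_update_system_state!` could be parallelized with `Threads.@threads`. A hedged sketch, assuming (unverified here) that `update_system_state!` is thread-safe for distinct keys and that the store supports concurrent reads:

```julia
function _update_system_state_threaded!(sim::Simulation, model::EmulationModel)
    sim_state = get_simulation_state(sim)
    simulation_time = get_current_time(sim)
    system_state = get_system_states(sim_state)
    store = get_simulation_store(sim)
    em_model_name = get_name(model)
    # Materialize the keys first so the work can be chunked across threads.
    keys_to_update = [k for k in get_container_keys(get_optimization_container(model))
                      if should_write_resulting_value(k)]
    Threads.@threads for key in keys_to_update
        update_system_state!(system_state, key, store, em_model_name, simulation_time)
    end
    IS.@record :execution StateUpdateEvent(simulation_time, em_model_name, "SystemState")
    return
end
```

Whether this pays off depends on how much time each key's update spends outside any shared lock; if the store serializes reads internally, the threads will just contend.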
`PM.replicate` will be costly and have a lot of data repetition.

`_update_simulation_state!` uses a DataFrames interface to move results between the Store cache and the simulation state. This causes every read from the store to allocate a DataFrame. Potential fix: use an `Array` to read from the store; difficulty: ensuring the columns match. (Tests show that this is not an issue.)