Issue when executing redispatching #87
EDITED the issue to show the current output and the expected output.
|
Hello, With the latest grid2op version (which we'll probably release Tuesday or Wednesday), this problem is "fixed". We put some flags in the parameters to activate / deactivate the possibility to perform a redispatching action on turned-off generators, and another one to ignore However, it appears that this action will not be performed regardless. Indeed, due to ramping, if you want to increase the hydro by 10 MW you need other generators to absorb that, and that's not possible on this grid. I'll send a piece of code when it is ready. |
Basically, the problem, once you can indeed ignore the downtimes and all that stuff, is that the grid modifications are so sharp (always saturating something: either pmin, pmax, the min ramp or the max ramp) that it's a real issue to use redispatching in this condition. You first need to start up some generators, but not too much, otherwise you'll consume all your ramps. And the problem is not grid2op, but rather the fact that, in your case, everything is already at its maximum. So you need to be careful and study each time step... |
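The ramp saturation described above can be sketched as a quick feasibility check: raising one unit by `delta` MW in a single step only works if that unit has ramp/pmax headroom and the other units can ramp down by the same total amount. This is an illustration only, not grid2op code; the function and all array names (`gen_p`, `gen_pmin`, etc.) are invented for the example.

```python
import numpy as np

def can_absorb(gen_p, gen_pmin, gen_pmax, ramp_up, ramp_down, gen_id, delta):
    """Check whether raising generator `gen_id` by `delta` MW in one time
    step can be compensated by the other units within their own limits."""
    gen_p = np.asarray(gen_p, dtype=float)
    gen_pmin = np.asarray(gen_pmin, dtype=float)
    gen_pmax = np.asarray(gen_pmax, dtype=float)
    ramp_up = np.asarray(ramp_up, dtype=float)
    ramp_down = np.asarray(ramp_down, dtype=float)
    # the targeted unit must itself have enough ramp and pmax headroom
    if delta > min(ramp_up[gen_id], gen_pmax[gen_id] - gen_p[gen_id]):
        return False
    others = np.arange(gen_p.size) != gen_id
    # how much the other units can ramp down without violating their pmin
    absorb = np.minimum(ramp_down[others], gen_p[others] - gen_pmin[others])
    return bool(np.sum(np.clip(absorb, 0.0, None)) >= delta)
```

When every other unit is already at its limit, `absorb` is zero everywhere and even a small `delta` is rejected, which matches the situation described above.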
But once you redispatch a unit, aren't the others modified at the same time by grid2op? Or do you (as a user) need to decrease the MW of the other gens? |
Hello, So this weekend, after quite some work, I finally managed to get a reasonable behavior of the redispatching in this case, without impacting the current behavior too much (and without increasing the computation time too much). Long story short, the "algorithm" that handles redispatching is now directly a solver: the redispatching is cast into a minimization problem. If you run the following, slightly adapted, script:

```python
import grid2op
import numpy as np

env = grid2op.make('wcci_test')
np.set_printoptions(precision=2)

# Run game from a specific ts
# Compare redispatching VS DoNothing
t0, tf = 7163, 7168

# Run redispatching
env.chronics_handler.tell_id(183)
obs = env.reset()
print("Env preparation (start enough generators)")
# stop before the time step to start up some generators (you can modify)
id_before = 10
env.fast_forward_chronics(t0 - id_before)
obs = env.get_obs()

# start each generator that can be started
gen_disp_orig = [2, 3, 10, 13, 16]  # some dispatchable generators
ratio = 0.3  # fine tune to max out the production available
array_before = [(el, ratio * env.gen_max_ramp_up[el]) for el in gen_disp_orig]
act2 = env.action_space({'redispatch': array_before})
for i in range(id_before):
    prev_p = obs.prod_p
    obs, reward, done, info = env.step(act2)
    this_p = obs.prod_p
print("I turned on +{:.2f}MW before starting to ramp up the dump".format(
    np.sum(obs.prod_p[gen_disp_orig])))

print('\nRedispatching results === -> action +10MW every time step')
dispatch_val = 10
new_ratio = -ratio * dispatch_val / np.sum(act2._redispatch)
array_before = [(el, new_ratio * env.gen_max_ramp_up[el]) for el in gen_disp_orig]
act2 = env.action_space({'redispatch': array_before})
print("actual dispatch init: {}".format(obs.actual_dispatch[gen_disp_orig]))
for i in range(tf - t0 + 1):
    action = env.action_space({'redispatch': [(8, dispatch_val)]})
    if np.sum(obs.prod_p[gen_disp_orig]) > 0.:
        action += act2
    prev_p = obs.prod_p
    obs, reward, done, info = env.step(action)
    print("done: {}".format(done))
    this_p = obs.prod_p
    print('\tHydro MW: {:.2f} MW at ts {:1d}'.format(obs.prod_p[8], t0 + i))
    print('\tenv target dispatch for hydro {}'.format(obs.target_dispatch[8]))
    if info["exception"]:
        print(info['exception'])
```

you get the following result:
This is not perfect, but that's a problem in the original data. If you want more redispatching, you can change the value of the variable
which is approximately the desired behaviour. NB: Redispatching now acts a bit differently than before. If, at a certain time step, there was an actual dispatch of -10.5 for some reason (i.e. this generator compensates the dispatch made on others) and you ask +10 on the same generator at the same time step, the target value will now be NB: This behaviour will be pushed in version 0.9, most likely Tuesday. |
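The "cast into a minimization problem" idea mentioned above can be sketched as a least-squares projection: find the zero-sum dispatch closest to the user's request, with each unit's adjustment kept inside its ramp limits. This formulation, the bisection solver, and every name below are assumptions made for illustration; grid2op's actual solver may differ.

```python
import numpy as np

def solve_dispatch(target, ramp_down, ramp_up, n_iter=200):
    """Zero-sum dispatch closest (in least squares) to `target`, with each
    unit's adjustment kept inside [-ramp_down, +ramp_up]."""
    target = np.asarray(target, dtype=float)
    lo = -np.asarray(ramp_down, dtype=float)
    hi = np.asarray(ramp_up, dtype=float)
    # bisection on the Lagrange multiplier of the sum(d) == 0 constraint:
    # d(lam) = clip(target - lam, lo, hi) is nonincreasing in lam
    lam_lo, lam_hi = np.min(target - hi), np.max(target - lo)
    for _ in range(n_iter):
        lam = 0.5 * (lam_lo + lam_hi)
        if np.sum(np.clip(target - lam, lo, hi)) > 0.0:
            lam_lo = lam  # total production still too high: push lam up
        else:
            lam_hi = lam
    return np.clip(target - 0.5 * (lam_lo + lam_hi), lo, hi)
```

Asking +10 MW on one unit out of three, with 5 MW of ramp-down available on each of the others, yields roughly [+6.67, -3.33, -3.33]: the request is honoured as closely as the production-balance constraint allows, instead of being rejected outright.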
Fixed (if we can say...) in version 0.9.0. |
I have experienced a weird behavior when playing with redispatching. In the following code, I tried to compare do-nothing actions against redispatching a hydro unit, as follows:
As you will see in the results, at time steps 7163, 7164 and 7165 the redispatching executes normally. In my opinion, the problem arises at time step 7165, where I get an exception; after this, because according to the dispatch the unit is at 0 MW, the module interprets it as not being able to redispatch the unit because it is "off". But it is not off, because I have been applying the action to increase it by 10 MW every time step.
The cumulative values behave normally until 7165, where unit 8 should effectively be around 50 MW, but after this it gives an incorrect value.
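The cumulative behaviour the issue expects can be shown in a few lines: asking "+10 MW" on the same unit every step adds up in the target dispatch, so after five steps the target is 50 MW. The helper name here is invented for illustration and is not the grid2op API.

```python
def accumulate_target(deltas):
    """Running target dispatch after a sequence of redispatch actions:
    each action is cumulative with the previous ones."""
    target, history = 0.0, []
    for delta in deltas:
        target += delta
        history.append(target)
    return history

# five consecutive "+10 MW" actions on the same unit
print(accumulate_target([10.0] * 5))  # -> [10.0, 20.0, 30.0, 40.0, 50.0]
```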