Error when creating custom chronics: number exceeding 180 #462
Comments
Hello, sorry for not answering sooner; I am mainly focused on L2RPN competitions at the moment. I have a few questions:
Hi Benjamin, Thanks for your response.
I am trying to retrieve the chronic information for the last day, where the environment terminates with a DoNothing agent, through this script. The original chronics are defined for 8064 timesteps, but this one produces chronics of 288 timesteps (1 day). I will subsequently use these extracted chronics to evaluate agent performance on a daily basis. The code fails because of a check. If you can take a quick look at this, I would appreciate it. Thanks!
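For reference, the daily extraction described above amounts to cutting each 8064-step chronic (5-minute steps) into 28 windows of 288 steps each. Below is a minimal sketch. The `split_and_save(datetime_beg, datetime_end, path_out)` call follows the `Multifolder` API as I understand it (dicts mapping chronic name to a `"%Y-%m-%d %H:%M"` string); check the exact signature against your grid2op version before relying on it.

```python
from datetime import datetime, timedelta

# One original chronic: 8064 steps at 5-minute resolution = 28 days.
STEPS_PER_DAY = 288          # 24 h * 60 min / 5 min
N_STEPS = 8064
N_DAYS = N_STEPS // STEPS_PER_DAY

def daily_windows(start, n_days=N_DAYS, minutes_per_step=5):
    """Yield (begin, end) datetimes covering one 288-step day each."""
    for d in range(n_days):
        beg = start + timedelta(days=d)
        end = beg + timedelta(minutes=minutes_per_step * (STEPS_PER_DAY - 1))
        yield beg, end

def split_into_days(env, chronic_name, start, path_out):
    """Hedged sketch: save each 1-day window as its own chronic folder."""
    fmt = "%Y-%m-%d %H:%M"
    for i, (beg, end) in enumerate(daily_windows(start)):
        # Assumed signature; see grid2op.Chronics.Multifolder docs.
        env.chronics_handler.real_data.split_and_save(
            {chronic_name: beg.strftime(fmt)},
            {chronic_name: end.strftime(fmt)},
            path_out=f"{path_out}/day_{i:03d}",
        )
```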
Hello, This check is here because if the index exceeds the size of the csv, there is no more data to retrieve (and Python would raise an index error anyway). I'm working on a solution that would allow reading data directly using EpisodeData (see the dev branch at https://github.com/bdonnot/grid2op/tree/ts_from_episodedata). It's a work in progress, but it looks promising from the first development. I need to consolidate the tests and finish coding the opponent, and then it will be usable. Basically: you run whatever agent (for now only do-nothing...) on an episode using a runner. You retrieve the EpisodeData, and then you can use it to do whatever you want. Future devs will include the possibility to use multiple EpisodeData, or a folder of experiments saved from a runner.
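The runner-then-EpisodeData workflow described above can be sketched as follows. This is written against the public grid2op API as I know it (`Runner`, `env.get_params_for_runner()`, `EpisodeData.from_disk`); the `ts_from_episodedata` branch itself may differ. The grid2op imports are deferred into the functions so the sketch can be read without the package installed.

```python
def run_and_save(env, path_save, nb_episode=1):
    """Run a do-nothing agent with a Runner, saving full episode data."""
    # Deferred imports: requires grid2op (`pip install grid2op`).
    from grid2op.Runner import Runner
    from grid2op.Agent import DoNothingAgent

    runner = Runner(**env.get_params_for_runner(), agentClass=DoNothingAgent)
    # With `path_save` set, the runner writes one sub-folder per episode.
    return runner.run(nb_episode=nb_episode, path_save=path_save)

def load_episode(path_save, episode_name):
    """Reload one saved episode from disk for later reuse."""
    from grid2op.Episode import EpisodeData
    return EpisodeData.from_disk(path_save, episode_name)
```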
But just to be clear: once you have split your data, the do-nothing agent's trajectory on the split data does not match the one from the original data, that's true.
Hello, There is now a solution: save your data as episode data, and then init an environment with it. To benefit from this feature, you need to install the dev version of grid2op from github:
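Once the dev version is installed (something like `pip install git+https://github.com/bdonnot/grid2op@ts_from_episodedata` — the exact command is my assumption), the feature above can be used roughly as sketched below. The class name `FromOneEpisodeData` and the `data_feeding_kwargs` argument reflect the dev branch as I understand it and may differ in your version.

```python
def make_env_from_episode(env_name, ep_data):
    """Hedged sketch: build an env that replays a saved EpisodeData
    as its time series, instead of reading csv chronics."""
    import grid2op
    from grid2op.Chronics import FromOneEpisodeData  # dev-branch class

    return grid2op.make(
        env_name,
        chronics_class=FromOneEpisodeData,
        data_feeding_kwargs={"ep_data": ep_data},
    )
```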
Hi Benjamin,
I noticed an issue when I try to split_and_save more than 180 chronics in one go. If the following code is executed with `nb_episode` greater than 180, it fails with the following error message
The issue has been created based on the comments in #447, which can now possibly be closed out...