
Error when creating custom chronics with number exceeding 180 #462

Closed
AvisP opened this issue Jun 6, 2023 · 5 comments
@AvisP

AvisP commented Jun 6, 2023

Hi Benjamin,

I noticed an issue when trying to split_and_save more than 180 chronics in one go. If the following code is executed with nb_episode greater than 180, it fails:

import grid2op
import re
import os
from grid2op.Reward import LinesCapacityReward  # or any other rewards
from lightsim2grid import LightSimBackend  # highly recommended !
from grid2op.Runner import Runner
from grid2op.Chronics import MultifolderWithCache  # highly recommended for training
import datetime

nb_episode = 200
nb_process = 1
verbose = True

env_name = "l2rpn_case14_sandbox"

env = grid2op.make(env_name,
                   reward_class=LinesCapacityReward,
                   backend=LightSimBackend(),
                   chronics_class=MultifolderWithCache)

env.chronics_handler.real_data.set_filter(lambda x: re.match(".*0*", x) is not None)
env.chronics_handler.real_data.reset()

runner_params = env.get_params_for_runner()
runner = Runner(**runner_params)

res = runner.run(path_save="./DN_logs/"+env_name+"/",
                nb_episode=nb_episode,
                nb_process=nb_process,
                # env_seeds=[i for i in range(nb_episode)],
                add_detailed_output=True,
                )
episode_info_beginning = {}
episode_info_end = {}
for _, chron_name, cum_reward, nb_time_step, max_ts, this_episode_data in res:
    msg_tmp = "chronics at: {}".format(chron_name)
    msg_tmp += "\ttotal score: {:.6f}".format(cum_reward)
    msg_tmp += "\ttime steps: {:.0f}/{:.0f}".format(nb_time_step, max_ts)
    print(msg_tmp)

    current_year = this_episode_data.observations[nb_time_step].year
    current_month = this_episode_data.observations[nb_time_step].month
    current_day = this_episode_data.observations[nb_time_step].day

    NextDay_Date = this_episode_data.observations[nb_time_step].get_time_stamp() + datetime.timedelta(days=1)

    episode_info_beginning[chron_name] = str(current_year)+'-'+str(current_month)+'-'+str(current_day)+" 00:00"
    episode_info_end[chron_name] = str(NextDay_Date.year)+'-'+str(NextDay_Date.month)+'-'+str(NextDay_Date.day)+" 00:00"

env.chronics_handler.real_data.split_and_save(episode_info_beginning, episode_info_end,
                          path_out=os.path.join("/......./Chronics_congestion/"+env_name+"_2/"))

with the following error message:

Traceback (most recent call last):
  File "/.../DN_logs_gen_custom_chronics.py", line 54, in <module>
    env.chronics_handler.real_data.split_and_save(episode_info_beginning, episode_info_end,
  File "/.../venvs/L2PRN/lib/python3.10/site-packages/grid2op/Chronics/multiFolder.py", line 746, in split_and_save
    tmp.split_and_save(
  File "/.../venvs/L2PRN/lib/python3.10/site-packages/grid2op/Chronics/gridStateFromFile.py", line 1146, in split_and_save
    curr_dt, *_ = tmp.load_next()
  File "/.../venvs/L2PRN/lib/python3.10/site-packages/grid2op/Chronics/gridStateFromFile.py", line 783, in load_next
    raise StopIteration
StopIteration

This issue was created based on the comments in #447, which can now probably be closed out.

@BDonnot
Collaborator

BDonnot commented Jul 10, 2023

Hello,

Sorry for not answering sooner, I am mainly focused on L2RPN competitions at the moment.

I have a few questions:

  1. Why do you use MultifolderWithCache? As explained in the doc, it is only relevant when you are training an agent and the hard drive is one of the training bottlenecks. Neither applies here.
  2. How many scenarios did you select with set_filter(lambda x: re.match(".*0*", x) is not None)? If it is 179 or 180, then I suspect the bug is that you cannot save the same episode twice with different characteristics.
  3. If you always want the same results (no randomness), you can make a class that inherits from GymEnv (if you use Gym / Gymnasium) and also call the "seed" function inside the "reset" function. This can be useful for evaluation, but it is a terrible idea for training.
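A minimal sketch of the idea in point 3. All names below are illustrative stand-ins, not the actual grid2op or gymnasium API: the point is only that a wrapper whose reset re-seeds the underlying RNG makes every evaluation episode start from the same random state.

```python
import random


class SeededResetEnv:
    """Illustrative wrapper: re-seed on every reset so evaluation is deterministic.

    This mirrors the suggestion of inheriting from GymEnv and calling
    ``seed`` inside ``reset``; the environment here is a stand-in,
    not the real grid2op / gymnasium API.
    """

    def __init__(self, seed: int = 0):
        self._seed = seed
        self._rng = random.Random()

    def reset(self):
        # Re-seeding here makes every episode start identically:
        # useful for evaluation, a terrible idea for training.
        self._rng.seed(self._seed)
        return self._rng.random()  # stand-in for an initial observation


env = SeededResetEnv(seed=42)
obs_first = env.reset()
obs_second = env.reset()
print(obs_first == obs_second)  # both resets yield the same observation
```

With a real gymnasium environment the same effect is achieved by passing a fixed seed to reset; the wrapper pattern just centralizes that choice so evaluation scripts cannot forget it.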

@AvisP
Author

AvisP commented Jul 13, 2023

Hi Benjamin,

Thanks for your response.

  1. True, I don't need MultifolderWithCache; I removed it.
  2. I had selected about 1000 scenarios. I realized the filter was not serving any purpose, so I removed it as well.
  3. That is an interesting idea; could I use it for issue #463 (Timesteps for DoNothing agent for Custom Chronics not replicated), where I am trying to get consistent results during evaluation on the 36-bus case without any attack scenario?

Through this script, I am trying to retrieve the chronics information for the last day on which the environment terminates with a DoNothing agent. The original chronics are defined for 8064 timesteps, but this script produces chronics of 288 timesteps (1 day). I will subsequently use these extracted chronics to evaluate agent performance on a daily basis.

The code fails because of a check where self.tmp_max_index has a value of 8065 and self.current_index accumulates until it exceeds it. Removing the check would probably solve my issue, but I am not sure whether it would cause other problems:
https://github.com/rte-france/Grid2Op/blob/b9969fdb16a020258aeca839e855c7197b067f42/grid2op/Chronics/gridStateFromFile.py#L840
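For context, the failing guard behaves roughly like this (a simplified stand-in, not the actual gridStateFromFile code): the loader raises StopIteration once current_index walks past tmp_max_index, which is what split_and_save trips over when the requested end date lies beyond the rows available in the chronics.

```python
class ChronicsLoaderSketch:
    """Simplified stand-in for the index check in gridStateFromFile.load_next."""

    def __init__(self, n_rows: int):
        self.tmp_max_index = n_rows   # e.g. 8065 in the report above
        self.current_index = -1

    def load_next(self):
        self.current_index += 1
        if self.current_index >= self.tmp_max_index:
            # No more rows in the csv: signal the end of the chronics.
            raise StopIteration
        return self.current_index     # stand-in for the loaded timestep


loader = ChronicsLoaderSketch(n_rows=3)
steps = []
try:
    while True:
        steps.append(loader.load_next())
except StopIteration:
    pass
print(steps)  # [0, 1, 2]
```

Simply deleting the guard would not help: the underlying csv really has no further rows, so an IndexError would surface instead of the StopIteration.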

If you could take a quick look at this, I would appreciate it. Thanks!

@BDonnot
Collaborator

BDonnot commented Jul 14, 2023

Hello,

This check is there because, once the index exceeds the size of the csv, there is no more data to retrieve; Python would raise an index error anyway.

I'm working on a solution that would allow reading data directly from EpisodeData (see the development here: https://github.com/bdonnot/grid2op/tree/ts_from_episodedata). It's a work in progress, but the first developments look promising.

I still need to consolidate the tests and finish coding the opponent, and then it will be usable.

Basically: you run whatever agent (for now only do nothing...) on an episode using a runner, retrieve the EpisodeData, and then use it to do whatever you want.

Future developments will include the possibility of using multiple EpisodeData objects, or a folder of experiments saved by a runner.

@BDonnot
Collaborator

BDonnot commented Jul 14, 2023

But just to be clear: once you have split your data, the do-nothing agent on the split data does not behave the same as on the original data, that's true. But if you set the seeds, every do-nothing run on the split data will behave the same way.

@BDonnot
Collaborator

BDonnot commented Aug 25, 2023

Hello,

There is now a solution: save your data as episode data, then initialize an environment with the FromMultiEpisodeData "chronics" class and the FromEpisodeDataOpponent opponent class. See the start of the doc here: https://grid2op.readthedocs.io/en/dev_1.9.4/chronics.html#grid2op.Chronics.FromMultiEpisodeData

To benefit from this feature, you need to install the dev version of grid2op from GitHub:

pip install git+https://github.com/rte-france/grid2op.git@dev_1.9.4

@BDonnot BDonnot closed this as completed Sep 4, 2023