Upgrade to version 1.6.4 #260

Merged
merged 32 commits into master on Nov 8, 2021
Conversation

@BDonnot (Collaborator) commented Nov 8, 2021

Some quality-of-life features and minor speed improvements.

Breaking changes

  • the names of the python files for the "agent" module are now lowercase (compliant with PEP 8). If you
    did things like from grid2op.Agent.BaseAgent import BaseAgent, you need to change it to
    from grid2op.Agent.baseAgent import BaseAgent or, even better (and this is the preferred way to include
    them), from grid2op.Agent import BaseAgent. It should not affect much code.

Fixed issues

  • a bug where a shunt still had a voltage when disconnected, with the pandapower backend
  • a bug that prevented printing the action space when some "part" of it had no size (empty action space)
  • a bug that prevented copying an action properly (especially for the alarm)
  • a bug that did not "close" the backend of the observation space when the environment was closed. This
    might be related to Dangling references prevent GC to collect instances #255

New features

  • serialization of current_iter and max_iter in the observation.
  • the possibility to use the runner only on certain episode ids
    (see runner.run(..., episode_id=[xxx, yyy, ...]))
  • a function that returns whether an action has any chance to modify the grid, see act.can_affect_something()
  • a type of agent that performs predefined actions from a given list
  • basic support for logging in the environment and the runner (more coming soon)
  • the possibility to make an environment with a reward instance, instead of relying on a reward class.
  • a possible implementation of an N-1 reward
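The "agent that performs predefined actions from a given list" can be sketched in plain Python. The class name ReplayAgent and its interface below are illustrative stand-ins, not grid2op's actual API:

```python
# Minimal sketch of an agent that replays a predefined list of actions.
# "ReplayAgent" and its signature are illustrative assumptions; grid2op's
# actual class name and interface may differ.
class ReplayAgent:
    def __init__(self, actions, fallback=None):
        self._actions = list(actions)  # actions to play, in order
        self._idx = 0
        self._fallback = fallback      # returned once the list is exhausted

    def act(self, observation, reward, done=False):
        # Return the next predefined action; fall back to a default
        # (e.g. a "do nothing" action) when the list is exhausted.
        if self._idx < len(self._actions):
            action = self._actions[self._idx]
            self._idx += 1
            return action
        return self._fallback


agent = ReplayAgent(["action_a", "action_b"], fallback="do_nothing")
print(agent.act(None, 0.0))  # action_a
print(agent.act(None, 0.0))  # action_b
print(agent.act(None, 0.0))  # do_nothing
```

Such an agent ignores the observation entirely, which is what makes it useful for replaying a recorded scenario or testing deterministic behaviour.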

Improvements

  • the right time stamp is now set in the observation after a game over.
  • the current number of steps is now correct when the observation is set to a game-over state.
  • documentation now clearly states that the action_class should not be modified.
  • the possibility to tell which chronics to use with the result of env.chronics_handler.get_id() (this is also
    compatible with the runner)
  • it is no longer possible to call "env.reset()" or "env.step()" after an environment has been closed: a clean error
    is raised in this case.
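The "clean error after close" behaviour can be illustrated with a small guard pattern. The Env class below is a toy stand-in, not grid2op's implementation:

```python
# Toy illustration of refusing step()/reset() on a closed environment.
# This sketches the pattern only; grid2op's actual code differs.
class Env:
    def __init__(self):
        self._closed = False

    def _check_open(self, method):
        # Raise a clear error instead of failing later on a dead backend.
        if self._closed:
            raise RuntimeError(
                f"Cannot call {method}(): this environment is closed."
            )

    def reset(self):
        self._check_open("reset")
        return "initial observation"

    def step(self, action):
        self._check_open("step")
        return "observation", 0.0, False, {}

    def close(self):
        # Release resources (e.g. the backend) exactly once.
        self._closed = True


env = Env()
env.reset()
env.close()
try:
    env.step(None)
except RuntimeError as exc:
    print(exc)  # Cannot call step(): this environment is closed.
```

Failing fast like this also helps with issues like dangling references: once closed, the object no longer pretends to hold live backend state.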

BDonnot and others added 30 commits September 7, 2021 13:35
…e over, adding compatibility to set chronics with string id
…an environment with a given reward, instead of a reward class
Merge into master to prepare for release of 1.6.4
@BDonnot BDonnot merged commit 52d2a45 into master Nov 8, 2021
@BDonnot BDonnot deleted the dev_1.6.4 branch December 12, 2022 09:38