docs: improve package description (#260)
rickstaa authored Jun 22, 2023
1 parent 143bf04 commit eabfef3
Showing 4 changed files with 6 additions and 27 deletions.
.github/workflows/stable_learning_control.yml (4 changes: 0 additions & 4 deletions)

```diff
@@ -57,9 +57,6 @@ jobs:
       fail-fast: false # Run all matrix jobs
       matrix:
         python-version: [3.8, 3.9, "3.10"] # Supported python versions
-      # permissions:
-      #   contents: read
-      #   pull-requests: write # Needed for codeconv to write on pull requests
     steps:
       - name: Checkout stable-learning-control repository
         uses: actions/checkout@v3
@@ -83,4 +80,3 @@ jobs:
         uses: codecov/codecov-action@v3
         env:
           CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
-
```
README.md (25 changes: 4 additions & 21 deletions)

````diff
@@ -3,32 +3,15 @@
 [![Baysian Learning Control CI](https://github.com/rickstaa/stable-learning-control/actions/workflows/stable_learning_control.yml/badge.svg)](https://github.com/rickstaa/stable-learning-control/actions/workflows/stable_learning_control.yml)
 [![GitHub release (latest by date)](https://img.shields.io/github/v/release/rickstaa/stable-learning-control)](https://github.com/rickstaa/stable-learning-control/releases)
 [![Python 3](https://img.shields.io/badge/Python->=3.8-brightgreen)](https://www.python.org/)
-[![codecov](https://codecov.io/gh/rickstaa/stable-learning-control/branch/main/graph/badge.svg?token=RFM3OELQ3L)](https://codecov.io/gh/rickstaa/stable-learning-control)
+[![codecov](https://codecov.io/gh/rickstaa/stable-learning-control/branch/main/graph/badge.svg?token=4SAME74CJ7)](https://codecov.io/gh/rickstaa/stable-learning-control)
 [![Contributions](https://img.shields.io/badge/contributions-welcome-brightgreen.svg)](CONTRIBUTING.md)
 
 ## Package Overview
 
-Welcome to the Stable Learning Control (SLC) framework! The Stable Learning Control framework enables you to automatically create, train and deploy various safe (stable and robust) Reinforcement Learning (RL) and Imitation learning (IL) control algorithms directly from real-world data. This framework is made up of four main modules:
+The Stable Learning Control (SLC) framework is a collection of robust Reinforcement Learning control algorithms designed to ensure stability. These algorithms are built upon the Lyapunov actor-critic architecture introduced by [Han et al. 2020](http://arxiv.org/abs/2004.14288). They guarantee stability and robustness by leveraging [Lyapunov stability theory](https://en.wikipedia.org/wiki/Lyapunov_stability). These algorithms are specifically tailored for use with [gymnasium environments](https://gymnasium.farama.org/) that feature a positive definite cost function. Several ready-to-use compatible environments can be found in the [stable-gym](https://github.com/rickstaa/stable-gym) package.
 
-* [Modeling](./stable_learning_control/modeling): Module that uses state of the art System Identification and State Estimation techniques to create an [gymnasium environment](https://gymnasium.farama.org/) out of real data.
-* [Control](./stable_learning_control/control): Module used to train several [Stable Learning Control](https://rickstaa.github.io/stable-learning-control/control/control.html) RL/IL agents on the built [gymnasium](https://gymnasium.farama.org/) environments.
-* [Hardware](./stable_learning_control/hardware): Module that can be used to deploy the trained RL/IL agents onto the hardware of your choice.
-
-This framework follows a code structure similar to the [Spinningup](https://spinningup.openai.com/en/latest/) educational package. By doing this, we hope to make it easier for new researchers to get started with our Algorithms. If you are new to RL, you are therefore highly encouraged first to check out the SpinningUp documentation and play with before diving into our codebase. Our implementation sometimes deviates from the [Spinningup](https://spinningup.openai.com/en/latest/) version to increase code maintainability, extensibility and readability.
-
-## Clone the repository
-
-Since the repository contains several git submodules to use all the features, it needs to be cloned using the `--recurse-submodules` argument:
-
-```bash
-git clone --recurse-submodules https://github.com/rickstaa/stable-learning-control.git
-```
-
-If you already cloned the repository and forgot the `--recurse-submodule` argument you can pull the submodules using the following git command:
-
-```bash
-git submodule update --init --recursive
-```
+> **Note**
+> This framework follows a code structure similar to the [Spinningup](https://spinningup.openai.com/en/latest/) educational package. By doing this, we aim to make it easier for new researchers to start with our Algorithms. If you are new to RL, you are therefore highly encouraged to check out the SpinningUp documentation and play with it before diving into our codebase. Our implementation sometimes deviates from the [Spinningup](https://spinningup.openai.com/en/latest/) version to increase code maintainability, extensibility and readability.
 ## Installation and Usage
 
````
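For context on the new description: SLC's algorithms target gymnasium environments whose cost signal (returned where a reward would normally be) is positive definite, i.e. zero only at the desired state and positive everywhere else. The sketch below is a minimal, hypothetical illustration of such an environment; the class name `QuadraticCostEnv`, the integrator dynamics, and the weight matrix `Q` are assumptions for this example, not code from the stable-gym package:

```python
import gymnasium as gym
import numpy as np


class QuadraticCostEnv(gym.Env):
    """Hypothetical toy environment with a positive definite cost c(s) = s^T Q s.

    Illustration only; see the stable-gym package for real SLC-compatible
    environments.
    """

    def __init__(self):
        self.observation_space = gym.spaces.Box(-10.0, 10.0, shape=(2,), dtype=np.float64)
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float64)
        self.Q = np.eye(2)  # Positive definite weight matrix.
        self.state = np.zeros(2)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)  # Seeds self.np_random.
        self.state = self.np_random.uniform(-1.0, 1.0, size=2)
        return self.state.copy(), {}

    def step(self, action):
        self.state = self.state + 0.05 * np.asarray(action)  # Simple integrator dynamics.
        cost = float(self.state @ self.Q @ self.state)  # Zero only at the origin, positive elsewhere.
        terminated = bool(np.linalg.norm(self.state) > 10.0)
        # The cost takes the place of the usual reward; stability-oriented
        # agents minimise it rather than maximise a reward.
        return self.state.copy(), cost, terminated, False, {}
```

Because such a cost is positive definite, it can serve as the basis for a candidate Lyapunov function, which is what links this environment requirement to the stability guarantees of the Lyapunov actor-critic approach of Han et al. 2020.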
package.json (2 changes: 1 addition & 1 deletion)

```diff
@@ -1,7 +1,7 @@
 {
   "name": "stable-learning-control",
   "version": "4.0.0",
-  "description": "A framework for learning stable reinforcement learning policies.",
+  "description": "A framework for training theoretically stable (and robust) Reinforcement Learning control algorithms.",
   "keywords": [
     "reinforcement-learning",
     "simulation",
```
pyproject.toml (2 changes: 1 addition & 1 deletion)

```diff
@@ -11,7 +11,7 @@ authors = [
     {name = "Rick Staa", email = "[email protected]"}
 ]
 license = {file = "LICENSE"}
-description = "A package that contains several gymnasium environments with cost functions compatible with (stable) RL agents (i.e. positive definite)."
+description = "A framework for training theoretically stable (and robust) Reinforcement Learning control algorithms."
 keywords = [
     "reinforcement-learning",
     "simulation",
```
