RLCard is a toolkit for Reinforcement Learning (RL) in card games. It supports multiple card environments with easy-to-use interfaces. The goal of RLCard is to bridge reinforcement learning and imperfect-information games, and to push forward research on reinforcement learning in domains with multiple agents, large state and action spaces, and sparse rewards. RLCard is developed by DATA Lab at Texas A&M University.
- Official Website: http://www.rlcard.org
- Paper: https://arxiv.org/abs/1910.04376
News:
- New game Gin Rummy is available. Thanks to @billh0420 for the contribution.
- A PyTorch implementation is available. Thanks to @mjudell for the contribution.
- We have just initialized a list of Awesome-Game-AI resources. Check it out!
If you find this repo useful, you may cite:
```bibtex
@article{zha2019rlcard,
  title={RLCard: A Toolkit for Reinforcement Learning in Card Games},
  author={Zha, Daochen and Lai, Kwei-Herng and Cao, Yuanpu and Huang, Songyi and Wei, Ruzhe and Guo, Junyu and Hu, Xia},
  journal={arXiv preprint arXiv:1910.04376},
  year={2019}
}
```
Make sure that you have Python 3.5+ and pip installed. We recommend installing rlcard with pip as follows:
```bash
git clone https://github.com/datamllab/rlcard.git
cd rlcard
pip install -e .
```
or use PyPI with:
```bash
pip install rlcard
```
To use the TensorFlow implementation, run:
```bash
pip install rlcard[tensorflow]
```
To try out the PyTorch implementations of DQN and NFSP, run:
```bash
pip install rlcard[torch]
```
If you run into problems when installing PyTorch with the command above, you may follow the instructions on the PyTorch official website to install PyTorch manually.
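As a quick sanity check that the package is importable after installation (not part of the official instructions, just a minimal check):

```python
# Minimal post-installation check: this import fails if rlcard is not installed
import rlcard

print('rlcard imported successfully')
```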
Please refer to examples/. A short example is shown below.
```python
import rlcard
from rlcard.agents.random_agent import RandomAgent

# Create the Blackjack environment and register a random agent
env = rlcard.make('blackjack')
env.set_agents([RandomAgent(action_num=env.action_num)])

# Play one complete game and collect the transitions and final payoffs
trajectories, payoffs = env.run()
```
We also recommend the following toy examples.
- Playing with random agents
- Deep-Q learning on Blackjack
- Training CFR on Leduc Hold'em
- Having fun with pretrained Leduc model
- Leduc Hold'em as single-agent environment
- Running multiple processes
Run `examples/leduc_holdem_human.py` to play against the pre-trained Leduc Hold'em model. Leduc Hold'em is a simplified version of Texas Hold'em. The rules can be found here.
```
>> Leduc Hold'em pre-trained model

>> Start a new game!
>> Agent 1 chooses raise

=============== Community Card ===============
┌─────────┐
│░░░░░░░░░│
│░░░░░░░░░│
│░░░░░░░░░│
│░░░░░░░░░│
│░░░░░░░░░│
│░░░░░░░░░│
│░░░░░░░░░│
└─────────┘
=============== Your Hand ===============
┌─────────┐
│J        │
│         │
│         │
│    ♥    │
│         │
│         │
│        J│
└─────────┘
=============== Chips ===============
Yours:   +
Agent 1: +++
=========== Actions You Can Choose ===========
0: call, 1: raise, 2: fold

>> You choose action (integer):
```
Please refer to the Documents for general introductions. API documents are available at our website.
We provide a complexity estimation of the games on several aspects. InfoSet Number: the number of information sets; InfoSet Size: the average number of states in a single information set; Action Size: the size of the action space; Name: the name that should be passed to rlcard.make to create the game environment (see the short snippet after the table). We also provide links to the documentation and a random-agent example for each game.
Game | InfoSet Number | InfoSet Size | Action Size | Name | Usage |
---|---|---|---|---|---|
Blackjack (wiki, baike) | 10^3 | 10^1 | 10^0 | blackjack | doc, example |
Leduc Hold’em (paper) | 10^2 | 10^2 | 10^0 | leduc-holdem | doc, example |
Limit Texas Hold'em (wiki, baike) | 10^14 | 10^3 | 10^0 | limit-holdem | doc, example |
Dou Dizhu (wiki, baike) | 10^53 ~ 10^83 | 10^23 | 10^4 | doudizhu | doc, example |
Simple Dou Dizhu (wiki, baike) | - | - | - | simple-doudizhu | doc, example |
Mahjong (wiki, baike) | 10^121 | 10^48 | 10^2 | mahjong | doc, example |
No-limit Texas Hold'em (wiki, baike) | 10^162 | 10^3 | 10^4 | no-limit-holdem | doc, example |
UNO (wiki, baike) | 10^163 | 10^10 | 10^1 | uno | doc, example |
Gin Rummy (wiki, baike) | - | - | - | gin-rummy | doc, example |
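As a small illustration, the value in the Name column is the string passed to `rlcard.make`. For example, Dou Dizhu (a three-player game) can be created and played with random agents as follows; this sketch only reuses the API from the short example above:

```python
import rlcard
from rlcard.agents.random_agent import RandomAgent

# 'doudizhu' is taken from the Name column of the table above
env = rlcard.make('doudizhu')

# Dou Dizhu has three players, so register three random agents
env.set_agents([RandomAgent(action_num=env.action_num) for _ in range(3)])

trajectories, payoffs = env.run()
print(payoffs)  # one payoff per player
```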
The performance is measured by winning rates through tournaments.
The purposes of the main modules are listed below:
- /examples: Examples of using RLCard.
- /docs: Documentation of RLCard.
- /tests: Testing scripts for RLCard.
- /rlcard/agents: Reinforcement learning algorithms and human agents.
- /rlcard/envs: Environment wrappers (state representation, action encoding, etc.).
- /rlcard/games: Various game engines.
- /rlcard/models: Model zoo including pre-trained models and rule models.
- `rlcard.make(env_id, config={})`: Make an environment. `env_id` is the string name of an environment; `config` is a dictionary specifying environment configurations, as follows:
  - `allow_step_back`: Default `False`. `True` if allowing the `step_back` function to traverse backward in the tree.
  - `allow_raw_data`: Default `False`. `True` if allowing raw data in the `state`.
  - `single_agent_mode`: Default `False`. `True` if using single-agent mode, i.e., a Gym-style interface with the other players as pre-trained/rule models.
  - `active_player`: Default `0`. If `single_agent_mode` is `True`, `active_player` specifies which player to operate on in single-agent mode.
  - `record_action`: Default `False`. If `True`, a field `action_record` will be included in the `state` to record the historical actions. This may be used for human-agent play.
- `env.init_game()`: Initialize a game. Return the state and the first player ID.
- `env.step(action, raw_action=False)`: Take one step in the environment. `action` can be a raw action or an integer; `raw_action` should be `True` if the action is a raw action (string).
- `env.step_back()`: Available only when `allow_step_back` is `True`. Take one step backward. This can be used for algorithms that operate on the game tree, such as CFR.
- `env.get_payoffs()`: At the end of the game, return a list of payoffs for all the players.
- `env.get_perfect_information()`: (Currently only supported for some of the games) Obtain the perfect information of the current state.
- `env.set_agents(agents)`: `agents` is a list of `Agent` objects. The length of the list should equal the number of players in the game.
- `env.run(is_training=False)`: Run a complete game and return trajectories and payoffs. The function can be used after `set_agents` is called. If `is_training` is `True`, the `step` function of each agent will be used to play the game. If `is_training` is `False`, `eval_step` will be called instead.
- State Definition: A state will always have the observation `state['obs']` and the legal actions `state['legal_actions']`. If `allow_raw_data` is `True`, the state will also have the raw observation `state['raw_obs']` and the raw legal actions `state['raw_legal_actions']`.
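A minimal sketch of passing these configurations to `rlcard.make`; only the keys documented above are used:

```python
import rlcard

# Default configuration
env = rlcard.make('leduc-holdem')

# Expose raw data in the state and record the action history
env = rlcard.make(
    'leduc-holdem',
    config={'allow_raw_data': True, 'record_action': True},
)

# Single-agent (Gym-style) mode, controlling player 0 while the other
# players are pre-trained/rule models
env = rlcard.make(
    'leduc-holdem',
    config={'single_agent_mode': True, 'active_player': 0},
)
```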
For basic usage, `env.set_agents` and `env.run()` are a good choice. For advanced usage, you may also play the game step by step with `env.init_game()` and `env.step()`, as sketched below.
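A minimal sketch of such a step-by-step loop with a random policy; the return values of `env.step()` and the `env.is_over()` check are assumptions based on the bundled examples, so consult `examples/` for the exact interface:

```python
import random
import rlcard

env = rlcard.make('leduc-holdem')

state, player_id = env.init_game()       # documented: state and first player ID
while not env.is_over():                 # assumption: reports whether the game has ended
    action = random.choice(state['legal_actions'])  # pick any legal action
    state, player_id = env.step(action)  # assumption: returns next state and next player ID

print(env.get_payoffs())                 # one payoff per player
```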
Contributions to this project are greatly appreciated! Please create an issue for feedback or bug reports. If you want to contribute code, please refer to the Contributing Guide.
We would like to thank JJ World Network Technology Co.,LTD for the generous support.