This repository was archived by the owner on May 6, 2021. It is now read-only.

Commit eed2b71: Update README.md (1 parent: f9373e0)

File tree: 1 file changed (+20 −45 lines)

README.md

Lines changed: 20 additions & 45 deletions
````diff
@@ -12,15 +12,7 @@ pkg> add ReinforcementLearningEnvironments
 
 ## API
 
-| Method | Description |
-| :--- | :--------- |
-| `observe(env, observer=:default)` | Return the observation of `env` from the view of `observer` |
-| `reset!(env)` | Reset `env` to an initial state |
-| `interact!(env, action)` | Send `action` to `env`. For some multi-agent environments, `action` can be a dictionary of actions from different agents |
-| **Optional Methods** | |
-| `action_space(env)` | Return the action space of `env` |
-| `observation_space(env)` | Return the observation space of `env` |
-| `render(env)` | Show the current state of environment |
+All the environments here are supposed to have implemented the [`AbstractEnvironment`](https://github.com/JuliaReinforcementLearning/ReinforcementLearningBase.jl/blob/9205f6d7bdde5d17a5d2baedefcf8a1854b40698/src/interface.jl#L230-L261) related interfaces in [ReinforcementLearningBase.jl](https://github.com/JuliaReinforcementLearning/ReinforcementLearningBase.jl).
 
 ## Supported Environments
 
````
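As a quick illustration of the interface this commit points to, the generic agent/environment loop can be sketched as follows. This is a sketch assembled from names visible in this commit's own usage example (`get_action_space`, `observe`, `get_terminal`, calling `env(action)`); `reset!` comes from the old method table and is assumed to still exist in ReinforcementLearningBase.jl.

```julia
# Sketch: a generic interaction loop against the RLBase-style interface.
# Names are taken from this commit's usage example; `reset!` is assumed
# from the old API table and may differ in later versions.
using ReinforcementLearningEnvironments
using ReinforcementLearningBase

env = CartPoleEnv()
action_space = get_action_space(env)

reset!(env)                        # put env back into an initial state
while true
    action = rand(action_space)    # random policy for demonstration
    env(action)                    # step the environment
    obs = observe(env)
    get_terminal(obs) && break     # stop at the end of the episode
end
```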
````diff
@@ -32,51 +24,34 @@ By default, only some basic environments are installed. If you want to use some
 - MountainCarEnv
 - ContinuousMountainCarEnv
 - PendulumEnv
-- MDPEnv
-- POMDPEnv
-- DiscreteMazeEnv
-- SimpleMDPEnv
-- deterministic_MDP
-- absorbing_deterministic_tree_MDP
-- stochastic_MDP
-- stochastic_tree_MDP
-- deterministic_tree_MDP_with_rand_reward
-- deterministic_tree_MDP
-- deterministic_MDP
 
 ### 3-rd Party Environments
 
 | Environment Name | Dependent Package Name | Description |
 | :--- | :--- | :--- |
-| `AtariEnv` | [ArcadeLearningEnvironment.jl](https://github.com/JuliaReinforcementLearning/ArcadeLearningEnvironment.jl) | |
+| `AtariEnv` | [ArcadeLearningEnvironment.jl](https://github.com/JuliaReinforcementLearning/ArcadeLearningEnvironment.jl) | Tested only on Linux |
 | `ViZDoomEnv` | [ViZDoom.jl](https://github.com/JuliaReinforcementLearning/ViZDoom.jl) | Currently only a basic environment is supported. (By calling `basic_ViZDoom_env()`) |
-| `GymEnv` | [PyCall.jl](https://github.com/JuliaPy/PyCall.jl) | You need to manually install `gym` first |
-| `HanabiEnv` | [Hanabi.jl](https://github.com/JuliaReinforcementLearning/Hanabi.jl) | Hanabi is a turn based multi-player environment, the API is slightly different from the environments above. |
+| `GymEnv` | [PyCall.jl](https://github.com/JuliaPy/PyCall.jl) | You need to manually install `gym` first in Python |
+| `MDPEnv`, `POMDPEnv` | [POMDPs.jl](https://github.com/JuliaPOMDP/POMDPs.jl) | The `get_observation_space` method is undefined |
+| `OpenSpielEnv` | [OpenSpiel.jl](https://github.com/JuliaReinforcementLearning/OpenSpiel.jl) | (WIP) |
 
-**TODO:**
+## Usage
 
-- [ ] Box2d (Investigating)
-- [ ] Bullet (Investigating)
-
-How to enable 3-rd party environments?
+```julia
+julia> using ReinforcementLearningEnvironments
 
-Take the `AtariEnv` for example:
+julia> using ReinforcementLearningBase
 
-1. Install this package by:
-```julia
-pkg> add ReinforcementLearningEnvironments
-```
-2. Install corresponding dependent package by:
-```julia
-pkg> add ArcadeLearningEnvironment
-```
-3. Using the above two packages:
-```julia
-using ReinforcementLearningEnvironments
-using ArcadeLearningEnvironment
-env = AtariEnv("pong")
-```
+julia> env = CartPoleEnv()
+CartPoleEnv{Float64}(gravity=9.8,masscart=1.0,masspole=0.1,totalmass=1.1,halflength=0.5,polemasslength=0.05,forcemag=10.0,tau=0.02,thetathreshold=0.20943951023931953,xthreshold=2.4,max_steps=200)
 
-## Style Guide
+julia> action_space = get_action_space(env)
+DiscreteSpace{UnitRange{Int64}}(1:2)
 
-We favor the [YASGuide](https://github.com/jrevels/YASGuide) style guide.
+julia> while true
+           action = rand(action_space)
+           env(action)
+           obs = observe(env)
+           get_terminal(obs) && break
+       end
+```
````
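This commit removes the step-by-step instructions for enabling a 3-rd party environment, but the table of dependent packages implies the same general pattern: install the dependent package alongside this one, then load both. A sketch for `AtariEnv`, reassembled from the removed instructions and assuming the ArcadeLearningEnvironment.jl integration still works the same way:

```julia
# Sketch based on the installation steps removed in this commit.
# In the package REPL (press `]`), install both packages first:
#   pkg> add ReinforcementLearningEnvironments
#   pkg> add ArcadeLearningEnvironment
using ReinforcementLearningEnvironments
using ArcadeLearningEnvironment   # loading the dependent package enables `AtariEnv`

env = AtariEnv("pong")            # "pong" is the Atari ROM name
```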
