
Commit a71be84

Update Training-PPO.md
1 parent 0b18269 commit a71be84


Documents/Training-PPO.md

Lines changed: 14 additions & 15 deletions
@@ -11,11 +11,11 @@ The example [Getting Started with the 3D Balance Ball Environment](Getting-Start
    3. Change the BrainType of your brain to `InternalTrainable` in the Inspector.
2. Create a Trainer
    1. Attach a `TrainerPPO.cs` to any GameObject.
-   2. Create a `TrainerParamsPPO` scriptable object with proper parameters in your project and assign it to the Params field in `TrainerPPO.cs`.
+   2. Create a `TrainerParamsPPO` scriptable object with proper parameters in your project (in the Project window, select `Create/ml-agent/ppo/TrainerParamsPPO`), and assign it to the Params field in `TrainerPPO.cs`.
    3. Assign the Trainer to the `Trainer` field of your Brain.
3. Create a Model
    1. Attach a `RLModelPPO.cs` to any GameObject.
-   2. Create a `RLNetworkSimpleAC` scriptable with proper object in your project and assign it to the Network field in `RLModelPPO.cs`.
+   2. Create a `RLNetworkSimpleAC` scriptable object with proper parameters in your project (in the Project window, select `Create/ml-agent/ppo/RLNetworkSimpleAC`), and assign it to the Network field in `RLModelPPO.cs`.
    3. Assign the created Model to the `modelRef` field in `TrainerPPO.cs`.

4. Play and see how it works.
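The steps above describe the Inspector workflow. As a rough illustration only, the same wiring could be done from a script like the sketch below; the member names `parameters`, `modelRef` and `Network` are taken from the field descriptions on this page and are assumptions about the actual C# API, so check `TrainerPPO.cs` and `RLModelPPO.cs` before relying on them.

```csharp
// Illustrative sketch only -- the supported workflow is the Inspector setup above.
// Field names (parameters, modelRef, Network) are assumptions based on this page's
// descriptions and may not match the actual members of TrainerPPO / RLModelPPO.
using UnityEngine;

public class PPOSetupSketch : MonoBehaviour
{
    void Awake()
    {
        // Attach the trainer and the model (steps 2.1 and 3.1).
        var trainer = gameObject.AddComponent<TrainerPPO>();
        var model = gameObject.AddComponent<RLModelPPO>();

        // Normally these are assets created via Create/ml-agent/ppo/... in the Project window.
        var trainerParams = ScriptableObject.CreateInstance<TrainerParamsPPO>();
        var network = ScriptableObject.CreateInstance<RLNetworkSimpleAC>();

        trainer.parameters = trainerParams; // the "Params" field of TrainerPPO
        trainer.modelRef = model;           // the model the trainer will train
        model.Network = network;            // the network definition used by the model
    }
}
```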
@@ -27,28 +27,35 @@ We use similar parameters as in Unity ML-Agents. If something is confusing, read
* `isTraining`: Toggle this to switch between training and inference mode. Note that if `isTraining` is false when the game starts, the training part of the PPO model will not be initialized and you won't be able to train it in this run.
* `parameters`: You need to assign this field with a `TrainerParamsPPO` scriptable object.
* `continueFromCheckpoint`: If true, when the game starts, the trainer will try to load the saved checkpoint file to resume the previous training.
-* `checkpointPath`: the path of the checkpoint, including the file name.
-* `steps`: Just to show you the current step of the training.
+* `checkpointPath`: The path of the checkpoint directory.
+* `checkpointFileName`: The name of the checkpoint file.
+* `steps`: Shows the current step of the training. You can also change it during training if you want.

#### TrainerParamsPPO
* `learningRate`: Learning rate used to train the neural network.
* `maxTotalSteps`: Max steps the trainer will train for.
* `saveModelInterval`: The trained model will be saved every this many steps.
+* `logInterval`: How many training steps between each logging.
* `rewardDiscountFactor`: Gamma. See the PPO algorithm for details (and the objective sketched after this list).
* `rewardGAEFactor`: Lambda. See the PPO algorithm for details.
* `valueLossWeight`: Weight of the value loss relative to the policy loss in PPO.
* `timeHorizon`: Max number of steps after which the PPO trainer calculates the advantages from the collected data.
* `entropyLossWeight`: Weight of the entropy loss.
* `clipEpsilon`: See the PPO algorithm for details. The default value is usually fine.
+* `clipValueLoss`: Clipping factor in the value loss. The default value is usually fine.
* `batchSize`: Mini-batch size used when training.
* `bufferSizeForTrain`: PPO will train the model once the buffer size reaches this value.
-* `numEpochPerTrain`: For each training, the data in the buffer will be used repeatedly this amount of times.
-* `useHeuristicChance`: See [Training with Heuristics](#training-with-heuristics).
+* `numEpochPerTrain`: For each training, the data in the buffer will be reused this many times. Unity uses 3 by default.
+* `finalActionClip`: The final action passed to the agents will be clipped based on this value. Unity uses 3 by default.
+* `finalActionDownscale`: The final action passed to the agents will be downscaled based on this value. Unity uses 3 by default.
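For reference, several of these parameters map directly onto the standard PPO objective from the PPO paper (this is the textbook formulation, not code from this repository): `rewardDiscountFactor` is the discount gamma, `rewardGAEFactor` is the GAE lambda, `clipEpsilon` is the clipping range epsilon, and `valueLossWeight` / `entropyLossWeight` weight the value and entropy terms of the total loss.

```latex
% Generalized advantage estimation (gamma = rewardDiscountFactor, lambda = rewardGAEFactor)
\hat{A}_t = \sum_{l \ge 0} (\gamma \lambda)^l \delta_{t+l},
\qquad \delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)

% Clipped policy objective (epsilon = clipEpsilon), with probability ratio r_t(theta)
L^{CLIP}(\theta) = \mathbb{E}_t\left[ \min\left( r_t(\theta)\hat{A}_t,\;
  \operatorname{clip}\big(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\big)\hat{A}_t \right) \right],
\qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_\mathrm{old}}(a_t \mid s_t)}

% Total loss minimized by the trainer
L(\theta) = -L^{CLIP}(\theta) + c_v L^{VF}(\theta) - c_e S[\pi_\theta],
\qquad c_v = \mathrm{valueLossWeight},\; c_e = \mathrm{entropyLossWeight}
```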

#### RLModelPPO.cs
* `checkpointToLoad`: If you assign a model's saved checkpoint file to it, this will be loaded when the model is initialized, regardless of the trainer's loading. Might be used when you are not using a trainer.
+* `modelName`: The name of the model. It is used for the name scope when building the neural network. Can be empty by default.
+* `weightSaveMode`: This decides the names of the neural network's weights when they are saved to a checkpoint as a serialized dictionary. Usually there is no need to change this.
* `Network`: You need to assign this field with a scriptable object that implements RLNetworkPPO.cs.
* `optimizer`: The type of optimizer to use for this model when training. You can also set its parameters here.
+* `useInputNormalization`: Whether to automatically normalize vector observations (see Unity's [Doc](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Training-ML-Agents.md#training-config-file)). A sketch of this kind of running normalization follows below.
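As a rough illustration of what input normalization means here (a minimal sketch, not the repository's actual implementation), a running mean/variance normalizer for vector observations could look like this:

```csharp
// Minimal sketch of running observation normalization (Welford's algorithm).
// This only illustrates the idea behind useInputNormalization; it makes no
// assumption about how UnityTensorflowKeras implements it internally.
using System;

public class RunningNormalizer
{
    private readonly double[] mean;
    private readonly double[] m2;   // sum of squared deviations from the mean
    private long count;

    public RunningNormalizer(int size)
    {
        mean = new double[size];
        m2 = new double[size];
    }

    // Update the running mean/variance with one observation vector.
    public void Update(float[] observation)
    {
        count++;
        for (int i = 0; i < observation.Length; i++)
        {
            double delta = observation[i] - mean[i];
            mean[i] += delta / count;
            m2[i] += delta * (observation[i] - mean[i]);
        }
    }

    // Return the observation scaled to roughly zero mean and unit variance.
    public float[] Normalize(float[] observation)
    {
        var result = new float[observation.Length];
        for (int i = 0; i < observation.Length; i++)
        {
            double variance = count > 1 ? m2[i] / (count - 1) : 1.0;
            result[i] = (float)((observation[i] - mean[i]) / Math.Sqrt(variance + 1e-8));
        }
        return result;
    }
}
```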

#### RLNetworkSimpleAC
This is a simple implementation of RLNetworkAC that you can create and plug in as the neural network definition for any RLModelPPO. PPO uses an actor/critic structure (see the PPO algorithm).
@@ -59,15 +66,7 @@ This is a simple implementation of RLNetworkAC that you can create a plug it in
- `activationFunction`: Which activation function to use. Usually ReLU.
- `actorOutputLayerInitialScale`/`criticOutputLayerInitialScale`/`visualEncoderInitialScale`: Initial scale of the weights of the output layers.
- `actorOutputLayerBias`/`criticOutputLayerBias`/`visualEncoderBias`: Whether to use bias.
-
-## Training with Heuristics
-If you already know some policy that is better than random policy, you might give it as a hint to PPO to increase the training a bit.
-
-1. Implement the [AgentDependentDeicision](AgentDependentDeicision.md) for your policy and attach it to the agents that you want them to occasionally use this policy.
-2. In your trainer parameters, set `useHeuristicChance` to larger than 0.
-3. Use [TrainerParamOverride](TrainerParamOverride.md) to decrease the `useHeuristicChance` over time during the training.
-
-Note that your AgentDependentDeicision is only used in training mode. The chance of using it in each step for agent with the script attached depends on `useHeuristicChance`.
+- `shareEncoder`: Whether the actor/critic networks share the encoder weights. In Unity ML-Agents, this is set to true for discrete action spaces and false for continuous action spaces.

## Create your own neural network architecture
If you want to have your own neural network architecture instead of the one provided by [`RLNetworkSimpleAC`](#rlnetworksimpleac), you can inherit the `RLNetworkAC` class to build your own neural network. See the [source code](https://github.com/tcmxx/UnityTensorflowKeras/blob/tcmxx/docs/Assets/UnityTensorflow/Learning/PPO/TrainerPPO.cs) of `RLNetworkAC.cs` for documentation.

0 commit comments
