## Explanation of fields in the inspector
We use similar parameters as in Unity ML-Agents. If anything is confusing, see their [document](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Training-PPO.md) for more details.
#### TrainerPPO.cs
* `isTraining`: Toggle this to switch between training and inference mode. Note that if isTraining is false when the game starts, the training part of the PPO model will not be initialized and you won't be able to train it in this run (see the sketch after this list).
* `parameters`: You need to assign this field with a TrainerParamsPPO scriptable object.
* `continueFromCheckpoint`: If true, when the game starts, the trainer will try to load the saved checkpoint file to resume the previous training.
* `checkpointPath`: The path of the checkpoint, including the file name.
* `steps`: Shows the current step of the training.
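For example, another script can read or toggle these fields at runtime. Below is a minimal sketch, assuming TrainerPPO is a component in the scene that exposes the documented `isTraining` and `steps` fields as public members; the `TrainingWatcher` name and `stopAfterSteps` threshold are hypothetical:

```csharp
using UnityEngine;

// A minimal sketch, not part of the package: it assumes TrainerPPO exposes the
// documented fields (isTraining, steps) as public members on a scene component.
public class TrainingWatcher : MonoBehaviour
{
    public TrainerPPO trainer;           // assign in the inspector
    public int stopAfterSteps = 500000;  // hypothetical threshold

    void Update()
    {
        // isTraining can be switched off at runtime to move to inference, but
        // only if it was true when the game started (see the note above).
        if (trainer.isTraining && trainer.steps >= stopAfterSteps)
        {
            trainer.isTraining = false;
        }
    }
}
```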
#### TrainerParamsPPO
* `learningRate`: Learning rate used to train the neural network.
* `maxTotalSteps`: Maximum number of steps the trainer will train for.
* `saveModelInterval`: The trained model will be saved every this many steps.
* `numEpochPerTrain`: For each training update, the data in the buffer will be reused this many times.
* `useHeuristicChance`: See [Training with Heuristics](#training-with-heuristics).
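To give a feel for how the fields above fit together, here is a rough sketch of such a parameter asset. It is illustrative only: the actual TrainerParamsPPO class may declare more fields than shown here, and the menu path and example values are made up:

```csharp
using UnityEngine;

// Illustrative only: mirrors the documented fields, not the repo's real class.
[CreateAssetMenu(menuName = "ML/TrainerParamsPPO (sketch)")]
public class TrainerParamsPPOSketch : ScriptableObject
{
    public float learningRate = 3e-4f;     // example value, not a repo default
    public int maxTotalSteps = 1000000;    // max steps the trainer will train
    public int saveModelInterval = 10000;  // save the model every this many steps
    public int numEpochPerTrain = 3;       // reuse buffered data this many times
    public float useHeuristicChance = 0f;  // see Training with Heuristics
}
```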
#### RLModelPPO.cs
* `checkpointToLoad`: If you assign a model's saved checkpoint file to it, the checkpoint will be loaded when the model is initialized, regardless of the trainer's loading (see the sketch after this list). This might be used when you are not using a trainer.
* `Network`: You need to assign this field with a scriptable object that implements RLNetworkPPO.cs.
* `optimizer`: The type of optimizer to use for this model when training. You can also set its parameters here.
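The loading precedence described above can be summarized in a short sketch. This is pseudocode for the documented behavior, not the actual implementation; `LoadCheckpoint` is an assumed helper name:

```csharp
// Sketch of the checkpoint-loading precedence described above (not real code).
void InitializeCheckpoints(RLModelPPO model, TrainerPPO trainer)
{
    if (model.checkpointToLoad != null)
    {
        // The model's own checkpoint is loaded at initialization,
        // regardless of the trainer's settings.
        model.LoadCheckpoint(model.checkpointToLoad);    // assumed helper
    }
    else if (trainer != null && trainer.continueFromCheckpoint)
    {
        trainer.LoadCheckpoint(trainer.checkpointPath);  // assumed helper
    }
}
```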
#### RLNetworkSimpleAC
This is a simple implementation of RLNetworkAC that you can create and plug in as a neural network definition for any RLModelPPO. PPO uses an actor/critic structure (see the PPO algorithm).
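For reference, the heart of the PPO algorithm referred to here is the clipped surrogate objective from the PPO paper (Schulman et al., 2017), which the actor maximizes while the critic learns the value function used in the advantage estimate \hat{A}_t; r_t(\theta) is the probability ratio between the new and old policies:

```
L^{CLIP}(\theta) = \mathbb{E}_t\left[ \min\left( r_t(\theta)\,\hat{A}_t,\ \mathrm{clip}\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t \right) \right]
```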
* `actorHiddenLayers`/`criticHiddenLayers`: Hidden layers of the network. The array size is the number of hidden layers. Each element has four parameters that define that layer (see the placeholder sketch after this list). These have no default values, so you have to fill them all in.
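Since this document does not name the four per-layer parameters, the sketch below uses placeholder field names only; check the RLNetworkSimpleAC source for the real ones:

```csharp
using System;

// Placeholder names: the real per-layer type used by RLNetworkSimpleAC may differ.
[Serializable]
public class HiddenLayerDefSketch
{
    public int size;           // e.g. number of hidden units (placeholder)
    public float initialScale; // placeholder
    public bool useBias;       // placeholder
    public string activation;  // placeholder
}

// actorHiddenLayers/criticHiddenLayers would then be arrays of such elements:
// an array of length 2 defines two hidden layers, and every field must be set.
```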