To enable the vLLM engine to collect metrics, the "disable_log_stats" option needs to be either set to false
or left empty (false by default) in [model.json](https://github.com/triton-inference-server/vllm_backend/blob/main/samples/model_repository/vllm_model/1/model.json).

```bash
"disable_log_stats": false
```

*Note:* vLLM metrics are not reported to the Triton metrics server by default
due to potential performance slowdowns. To enable a vLLM model's metrics
reporting, please add the following lines to its config.pbtxt as well.
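The exact config.pbtxt lines are not shown in this excerpt; the sketch below illustrates what such a parameters block could look like. The `REPORT_CUSTOM_METRICS` parameter name is an assumption here and should be verified against the backend's sample model repository.

```bash
# Hypothetical config.pbtxt addition -- parameter name assumed, verify against
# the vllm_backend sample model repository before use.
parameters: {
  key: "REPORT_CUSTOM_METRICS"
  value: {
    string_value: "yes"
  }
}
```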