
Commit c2cdab2

yinggehmc-nv authored and committed
feat: Report histogram metrics to Triton metrics server (#58)
1 parent 843cbdd commit c2cdab2

File tree

3 files changed: +403 −0 lines changed


README.md

Lines changed: 65 additions & 0 deletions
@@ -202,6 +202,71 @@ you need to specify a different `shm-region-prefix-name` for each server. See
[here](https://github.com/triton-inference-server/python_backend#running-multiple-instances-of-triton-server)
for more information.

## Triton Metrics
Starting with the 24.08 release of Triton, users can now obtain specific
vLLM metrics by querying the Triton metrics endpoint (see the complete vLLM metrics
[here](https://docs.vllm.ai/en/latest/serving/metrics.html)). This can be
accomplished by launching a Triton server in any of the ways described above
(ensuring the build code / container is 24.08 or later). Once the server is running
successfully, you can query the metrics endpoint by entering the following:
```bash
curl localhost:8002/metrics
```
vLLM stats are reported by the metrics endpoint in fields that are prefixed with
`vllm:`. Triton currently supports reporting of the following metrics from vLLM:
```bash
# Number of prefill tokens processed.
counter_prompt_tokens
# Number of generation tokens processed.
counter_generation_tokens
# Histogram of time to first token in seconds.
histogram_time_to_first_token
# Histogram of time per output token in seconds.
histogram_time_per_output_token
```
Your output for these fields should look similar to the following:
```bash
# HELP vllm:prompt_tokens_total Number of prefill tokens processed.
# TYPE vllm:prompt_tokens_total counter
vllm:prompt_tokens_total{model="vllm_model",version="1"} 10
# HELP vllm:generation_tokens_total Number of generation tokens processed.
# TYPE vllm:generation_tokens_total counter
vllm:generation_tokens_total{model="vllm_model",version="1"} 16
# HELP vllm:time_to_first_token_seconds Histogram of time to first token in seconds.
# TYPE vllm:time_to_first_token_seconds histogram
vllm:time_to_first_token_seconds_count{model="vllm_model",version="1"} 1
vllm:time_to_first_token_seconds_sum{model="vllm_model",version="1"} 0.03233122825622559
vllm:time_to_first_token_seconds_bucket{model="vllm_model",version="1",le="0.001"} 0
vllm:time_to_first_token_seconds_bucket{model="vllm_model",version="1",le="0.005"} 0
...
vllm:time_to_first_token_seconds_bucket{model="vllm_model",version="1",le="+Inf"} 1
# HELP vllm:time_per_output_token_seconds Histogram of time per output token in seconds.
# TYPE vllm:time_per_output_token_seconds histogram
vllm:time_per_output_token_seconds_count{model="vllm_model",version="1"} 15
vllm:time_per_output_token_seconds_sum{model="vllm_model",version="1"} 0.04501533508300781
vllm:time_per_output_token_seconds_bucket{model="vllm_model",version="1",le="0.01"} 14
vllm:time_per_output_token_seconds_bucket{model="vllm_model",version="1",le="0.025"} 15
...
vllm:time_per_output_token_seconds_bucket{model="vllm_model",version="1",le="+Inf"} 15
```
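If you would rather consume these metrics programmatically than through `curl`, the snippet below is a minimal sketch of scraping and parsing the endpoint with Python's `requests` library, in the same spirit as the test added in this commit. The endpoint URL and the `scrape_vllm_metrics` helper name are illustrative assumptions, not part of the backend's API.
```python
import re

import requests

# Assumption: Triton's metrics endpoint is exposed on the default port 8002,
# as in the curl example above.
METRICS_URL = "http://localhost:8002/metrics"


def scrape_vllm_metrics(url=METRICS_URL):
    """Fetch the Prometheus-format metrics page and keep only vllm:* samples."""
    response = requests.get(url)
    response.raise_for_status()

    # Matches lines such as:
    #   vllm:prompt_tokens_total{model="vllm_model",version="1"} 10
    pattern = r"^(vllm:[^ {]+)(?:{.*})? ([0-9.-]+)$"
    metrics = {}
    for name, value in re.findall(pattern, response.text, re.MULTILINE):
        # Later samples with the same name (e.g. histogram buckets) overwrite
        # earlier ones, so each key ends up holding the last value reported.
        metrics[name] = float(value) if "." in value else int(value)
    return metrics


if __name__ == "__main__":
    for name, value in scrape_vllm_metrics().items():
        print(f"{name} = {value}")
```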
To enable the vLLM engine to collect metrics, the "disable_log_stats" option needs to be either set to
false or left unset (it defaults to false) in [model.json](https://github.com/triton-inference-server/vllm_backend/blob/main/samples/model_repository/vllm_model/1/model.json).
```bash
"disable_log_stats": false
```
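For reference, a complete `model.json` with stats collection enabled might look like the sketch below. Only `"disable_log_stats"` is relevant to metrics; the model name and the other engine arguments shown are illustrative assumptions based on the sample model repository, so adjust them to your deployment.
```json
{
    "model": "facebook/opt-125m",
    "disable_log_requests": true,
    "gpu_memory_utilization": 0.5,
    "disable_log_stats": false
}
```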
*Note:* vLLM metrics are not reported to the Triton metrics server by default
due to potential performance slowdowns. To enable a vLLM model's metrics
reporting, please add the following lines to its config.pbtxt as well:
```bash
parameters: {
  key: "REPORT_CUSTOM_METRICS"
  value: {
    string_value: "yes"
  }
}
```

## Referencing the Tutorial

You can read further in the
Lines changed: 164 additions & 0 deletions
@@ -0,0 +1,164 @@
# Copyright 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above copyright
#    notice, this list of conditions and the following disclaimer in the
#    documentation and/or other materials provided with the distribution.
#  * Neither the name of NVIDIA CORPORATION nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

import os
import re
import sys
import unittest
from functools import partial

import requests
import tritonclient.grpc as grpcclient
from tritonclient.utils import *

sys.path.append("../../common")
from test_util import TestResultCollector, UserData, callback, create_vllm_request


class VLLMTritonMetricsTest(TestResultCollector):
    def setUp(self):
        self.triton_client = grpcclient.InferenceServerClient(url="localhost:8001")
        self.tritonserver_ipaddr = os.environ.get("TRITONSERVER_IPADDR", "localhost")
        self.vllm_model_name = "vllm_opt"
        self.prompts = [
            "The most dangerous animal is",
            "The capital of France is",
            "The future of AI is",
        ]
        self.sampling_parameters = {"temperature": "0", "top_p": "1"}

    def get_vllm_metrics(self):
        """
        Store vllm metrics in a dictionary.
        """
        r = requests.get(f"http://{self.tritonserver_ipaddr}:8002/metrics")
        r.raise_for_status()

        # Regular expression to match the pattern
        pattern = r"^(vllm:[^ {]+)(?:{.*})? ([0-9.-]+)$"
        vllm_dict = {}

        # Find all matches in the text
        matches = re.findall(pattern, r.text, re.MULTILINE)

        for match in matches:
            key, value = match
            vllm_dict[key] = float(value) if "." in value else int(value)

        return vllm_dict

    def vllm_infer(
        self,
        prompts,
        sampling_parameters,
        model_name,
    ):
        """
        Helper function to send async stream infer requests to vLLM.
        """
        user_data = UserData()
        number_of_vllm_reqs = len(prompts)

        self.triton_client.start_stream(callback=partial(callback, user_data))
        for i in range(number_of_vllm_reqs):
            request_data = create_vllm_request(
                prompts[i],
                i,
                False,
                sampling_parameters,
                model_name,
                True,
            )
            self.triton_client.async_stream_infer(
                model_name=model_name,
                inputs=request_data["inputs"],
                request_id=request_data["request_id"],
                outputs=request_data["outputs"],
                parameters=sampling_parameters,
            )

        for _ in range(number_of_vllm_reqs):
            result = user_data._completed_requests.get()
            if type(result) is InferenceServerException:
                print(result.message())
            self.assertIsNot(type(result), InferenceServerException, str(result))

            output = result.as_numpy("text_output")
            self.assertIsNotNone(output, "`text_output` should not be None")

        self.triton_client.stop_stream()

    def test_vllm_metrics(self):
        # Test vLLM metrics
        self.vllm_infer(
            prompts=self.prompts,
            sampling_parameters=self.sampling_parameters,
            model_name=self.vllm_model_name,
        )
        metrics_dict = self.get_vllm_metrics()

        # vllm:prompt_tokens_total
        self.assertEqual(metrics_dict["vllm:prompt_tokens_total"], 18)
        # vllm:generation_tokens_total
        self.assertEqual(metrics_dict["vllm:generation_tokens_total"], 48)

        # vllm:time_to_first_token_seconds
        self.assertEqual(metrics_dict["vllm:time_to_first_token_seconds_count"], 3)
        self.assertGreater(metrics_dict["vllm:time_to_first_token_seconds_sum"], 0)
        self.assertEqual(metrics_dict["vllm:time_to_first_token_seconds_bucket"], 3)
        # vllm:time_per_output_token_seconds
        self.assertEqual(metrics_dict["vllm:time_per_output_token_seconds_count"], 45)
        self.assertGreater(metrics_dict["vllm:time_per_output_token_seconds_sum"], 0)
        self.assertEqual(metrics_dict["vllm:time_per_output_token_seconds_bucket"], 45)

    def test_vllm_metrics_disabled(self):
        # Test vLLM metrics
        self.vllm_infer(
            prompts=self.prompts,
            sampling_parameters=self.sampling_parameters,
            model_name=self.vllm_model_name,
        )
        metrics_dict = self.get_vllm_metrics()

        # No vLLM metric found
        self.assertEqual(len(metrics_dict), 0)

    def test_vllm_metrics_refused(self):
        # Test vLLM metrics
        self.vllm_infer(
            prompts=self.prompts,
            sampling_parameters=self.sampling_parameters,
            model_name=self.vllm_model_name,
        )
        with self.assertRaises(requests.exceptions.ConnectionError):
            self.get_vllm_metrics()

    def tearDown(self):
        self.triton_client.close()


if __name__ == "__main__":
    unittest.main()
