
Commit dacabcd

mikekgfb and Jack-Khuu authored
Update README.md to run and query server during test (#1384)
* Update README.md to run and query server
  1. Run server:
     1a. in the background
     1b. capture server_pid
  2. Enable querying with curl
  3. Shut down the server with the PID captured in server_pid

* Punctuation in README.md
  Fix a punctuation issue in the README. While this is a valid change that improves the language, it is really a decoy to trigger rerunning a test that failed due to a SEV.

* Extend timeout for run-readme-pr-mps.yml
  The README run on M1 with MPS takes over 30 minutes and may be hitting the default timeout, so the timeout is extended.

---------

Co-authored-by: Jack-Khuu <[email protected]>
1 parent 8782542 commit dacabcd
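The README change below wires the documented server example into the test harness with a standard shell job-control pattern: launch the server as a background job, record its PID from `$!`, run the queries, then kill that PID. A minimal sketch of the pattern, assuming the README's model name; the placeholder comment stands in for the queries and is not the README's exact text:

```sh
# Start the torchchat server as a background job and capture its PID.
python3 torchchat.py server llama3.1 &
server_pid=$!

# ... query the server here (see the curl example in the README diff below) ...

# Shut the server down once the queries finish, using the captured PID.
kill ${server_pid}
```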

File tree

2 files changed (+6, -3 lines)


.github/workflows/run-readme-pr-mps.yml

Lines changed: 1 addition & 0 deletions
@@ -10,6 +10,7 @@ jobs:
     uses: pytorch/test-infra/.github/workflows/macos_job.yml@main
     with:
       runner: macos-m1-14
+      timeout-minutes: 50
       script: |
         conda create -y -n test-readme-mps-macos python=3.10.11 llvm-openmp
         conda activate test-readme-mps-macos

README.md

Lines changed: 5 additions & 3 deletions
@@ -231,6 +231,8 @@ python3 torchchat.py server llama3.1
 ```
 [skip default]: end
 
+[shell default]: python3 torchchat.py server llama3.1 & server_pid=$!
+
 In another terminal, query the server using `curl`. Depending on the model configuration, this query might take a few minutes to respond.
 
 > [!NOTE]
@@ -244,8 +246,6 @@ Setting `stream` to "true" in the request emits a response in chunks. If `stream
 
 **Example Input + Output**
 
-[skip default]: begin
-
 ```
 curl http://127.0.0.1:5000/v1/chat/completions \
 -H "Content-Type: application/json" \
@@ -265,12 +265,14 @@ curl http://127.0.0.1:5000/v1/chat/completions \
 ]
 }'
 ```
+[skip default]: begin
 ```
 {"response":" I'm a software developer with a passion for building innovative and user-friendly applications. I have experience in developing web and mobile applications using various technologies such as Java, Python, and JavaScript. I'm always looking for new challenges and opportunities to learn and grow as a developer.\n\nIn my free time, I enjoy reading books on computer science and programming, as well as experimenting with new technologies and techniques. I'm also interested in machine learning and artificial intelligence, and I'm always looking for ways to apply these concepts to real-world problems.\n\nI'm excited to be a part of the developer community and to have the opportunity to share my knowledge and experience with others. I'm always happy to help with any questions or problems you may have, and I'm looking forward to learning from you as well.\n\nThank you for visiting my profile! I hope you find my information helpful and interesting. If you have any questions or would like to discuss any topics, please feel free to reach out to me. I"}
 ```
 
 [skip default]: end
 
+[shell default]: kill ${server_pid}
 
 </details>
 
@@ -664,6 +666,6 @@ awesome libraries and tools you've built around local LLM inference.
 
 torchchat is released under the [BSD 3 license](LICENSE). (Additional
 code in this distribution is covered by the MIT and Apache Open Source
-licenses.) However you may have other legal obligations that govern
+licenses.) However, you may have other legal obligations that govern
 your use of content, such as the terms of service for third-party
 models.
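The JSON request body between the second and third hunks is unchanged and therefore not shown in the diff. A hedged sketch of what a complete request to the `/v1/chat/completions` endpoint can look like is below; the exact `model` value and message contents are assumptions, not the README's verbatim example:

```sh
# Send a chat completion request to the locally running torchchat server.
curl http://127.0.0.1:5000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.1",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Tell me about yourself."}
    ]
  }'
```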

0 commit comments