
Avoid curl failures due to server startup time #1418


Merged: 2 commits merged into pytorch:main on Dec 12, 2024
Conversation

mikekgfb (Contributor)
Add sleep after server startup to make sure the server is ready prior to the client request via `curl`

pytorch-bot bot commented Dec 11, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchchat/1418

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (2 Unrelated Failures)

As of commit d05f793 with merge base 7b86dc3:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label (managed by the Meta Open Source bot) on Dec 11, 2024
@Jack-Khuu (Contributor)

I like sleep


mikekgfb commented Dec 12, 2024

> I like sleep

Yep, gives the service enough time to come up. =>

https://github.com/pytorch/torchchat/actions/runs/12301358926/job/34331782890?pr=1418

  + server_pid=383
  + sleep 90
  + python3 torchchat.py server stories15M
  NumExpr defaulting to 16 threads.
  PyTorch version 2.6.0.dev20241028+cu121 available.
  Using device=cuda NVIDIA A10G
  Loading model...
  Time to load model: 0.21 seconds
  -----------------------------------------------------------
   * Serving Flask app 'torchchat.usages.server'
   * Debug mode: off
  WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
   * Running on http://127.0.0.1:5000
  Press CTRL+C to quit
  + curl http://127.0.0.1:5000/v1/chat/completions -H 'Content-Type: application/json' -d '{
      "model": "stories15M",
      "stream": "true",
      "max_tokens": 200,
      "messages": [
        {
          "role": "system",
          "content": "You are a helpful assistant."
        },
        {
          "role": "user",
          "content": "Hello!"
        }
      ]
    }'
   === Completion Request ===
  127.0.0.1 - - [12/Dec/2024 17:23:24] "POST /v1/chat/completions HTTP/1.1" 200 -
  data:{"id": "chatcmpl-945c812d-3491-48b4-90c6-3eb9c0996e57", "choices": [{"delta": {"role": "assistant", "content": "3"}}], "created": 1734024204, "model": "stories15M", "system_fingerprint": "cuda_torch.bfloat16", "object": "chat.completion.chunk"}
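A fixed `sleep 90`, as in the log above, works but always waits the full interval even when the server binds much sooner. A common alternative is to poll the endpoint with `curl` until it answers. The sketch below is not part of this PR; `wait_for_server` and its arguments are illustrative names:

```shell
# Hypothetical helper: poll a URL until the server responds or a timeout
# expires. Any HTTP response (even a 404) means the server is listening,
# so curl is used without --fail here; a connection error exits nonzero.
wait_for_server() {
  local url="$1" timeout="${2:-90}" start
  start=$(date +%s)
  until curl --silent --output /dev/null "$url"; do
    if [ $(( $(date +%s) - start )) -ge "$timeout" ]; then
      echo "server not ready after ${timeout}s" >&2
      return 1
    fi
    sleep 1  # back off briefly between probes
  done
}
```

Usage in a CI script would look like `wait_for_server http://127.0.0.1:5000/ 90 && curl http://127.0.0.1:5000/v1/chat/completions ...`, keeping the 90 seconds as an upper bound rather than a mandatory wait.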

@Jack-Khuu Jack-Khuu merged commit 4dc2f89 into pytorch:main Dec 12, 2024
50 of 52 checks passed
vmpuri pushed a commit that referenced this pull request Feb 4, 2025
Add sleep after server startup to make sure server ready prior to client request via `curl`

Co-authored-by: Jack-Khuu <[email protected]>
3 participants