
[Distributed Inference] Make torch run work for torchchat and fix TP bugs #877


Merged 7 commits into main on Jul 2, 2024

Conversation

fduwjj
Contributor

@fduwjj fduwjj commented Jul 2, 2024

In torchchat we currently set the device to plain "cuda", which puts every rank on cuda:0 and causes a CUDA OOM during checkpoint loading. With this fix I can now run all the way to the prompt. However, each rank still requires its own Enter keypress at the prompt, which is something we need to solve next.
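The gist of the device fix is deriving the CUDA device index from the rank instead of hard-coding "cuda". A minimal sketch, assuming the torchrun-provided `LOCAL_RANK` environment variable (the helper name `device_for_rank` is hypothetical, not torchchat's actual API):

```python
import os

def device_for_rank() -> str:
    # torchrun exports LOCAL_RANK for each worker process;
    # fall back to 0 for single-process runs.
    local_rank = int(os.environ.get("LOCAL_RANK", "0"))
    # Pinning each rank to its own GPU keeps all ranks from piling
    # onto cuda:0 and OOMing during checkpoint loading.
    return f"cuda:{local_rank}"

os.environ["LOCAL_RANK"] = "3"
print(device_for_rank())  # -> cuda:3
```

The resulting string can then be passed wherever the code previously used the bare "cuda" device.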

Also, for the TP part we need plain tensor parallelism, not the sequence parallelism we used for training.

To test distributed inference with torchrun, run ./distributed/run_dist_inference.sh.
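Scripts like this typically wrap a torchrun launch; a rough sketch of such an invocation (GPU count and entry-point name are illustrative, not taken from the actual script):

```shell
# Illustrative torchrun launch; the real script's arguments may differ.
# --nproc-per-node controls how many ranks (GPUs) are spawned per host,
# and each rank receives its own LOCAL_RANK environment variable.
torchrun --standalone --nproc-per-node=4 dist_run.py
```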


pytorch-bot bot commented Jul 2, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchchat/877

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 655ea0f with merge base c716548:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Meta Open Source bot. label Jul 2, 2024
@fduwjj fduwjj requested review from lessw2020 and kartikayk July 2, 2024 18:22
@fduwjj fduwjj changed the title [Dist][Inference] U-haul TP and distribute utils code to TorchChat [Distributed Inference] Make torch run work for torchchat and fix TP bugs Jul 2, 2024
Contributor

@lessw2020 lessw2020 left a comment


Thanks for adding this, especially the OOM (device 0) fix.
Tiny nit: update the one TP comment and remove the reference to sequence parallel, since it's no longer used.

@fduwjj fduwjj merged commit 7973c2a into main Jul 2, 2024
51 checks passed
vmpuri pushed a commit that referenced this pull request Jul 8, 2024
…bugs (#877)

* [Distributed Inference] Make torch run work for torchchat
malfet pushed a commit that referenced this pull request Jul 17, 2024
…bugs (#877)

* [Distributed Inference] Make torch run work for torchchat
Labels
CLA Signed This label is managed by the Meta Open Source bot.
3 participants