How do I use multi gpu setups? #2132
FileDotZip started this conversation in General
Replies: 1 comment
-
https://github.com/ggerganov/llama.cpp/tree/master/examples/main
-
I have an Intel scalable GPU server with 6x NVIDIA P40 video cards, each with 24 GB of VRAM.
How can I tell llama.cpp to use as much VRAM as it needs across this cluster of GPUs? Does it do this automatically?
I am following this guide at step 6.
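For reference, a minimal sketch of how layer offloading across multiple GPUs is typically invoked with a CUDA build of llama.cpp; the model path and prompt here are placeholders, and the even six-way split is just an illustration for six identical cards:

```shell
# Offload (up to) all layers to the GPUs and split tensors evenly
# across the six P40s; GPU 0 holds the small shared buffers.
# ./models/model.gguf is a placeholder path.
./main -m ./models/model.gguf \
  --n-gpu-layers 99 \
  --tensor-split 1,1,1,1,1,1 \
  --main-gpu 0 \
  -p "Hello"
```

`--tensor-split` takes relative proportions rather than absolute sizes, so mixed-VRAM setups can weight cards unevenly (e.g. `3,1` for a 24 GB card paired with an 8 GB card).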