
cmake: don't fail on GGML_CPU=OFF #11457


Merged 1 commit on Jan 28, 2025

Conversation
Conversation

someone13574 (Contributor) commented Jan 27, 2025

This fixes an issue where ggml would fail to build if GGML_CPU was set to OFF. A CPU-enabled build is still required to build the examples, but this change allows the llama library itself to be built without the CPU backend.
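With this fix, a library-only configuration like the following should succeed. This is a sketch, not a command taken from the PR: the `GGML_CPU` option comes from the PR description, while the `llama` target name and the other flags are assumptions based on common llama.cpp CMake conventions.

```shell
# Configure without the CPU backend; only the library is built,
# since the examples still require a CPU-enabled build.
# GGML_CPU is the option this PR fixes; the rest is illustrative.
cmake -B build -DGGML_CPU=OFF -DBUILD_SHARED_LIBS=ON

# Build just the llama library target (target name assumed).
cmake --build build --target llama
```

Before this change, the first configure step would fail outright when `GGML_CPU=OFF` was passed.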

github-actions bot added the ggml label (changes relating to the ggml tensor library for machine learning) on Jan 27, 2025
slaren merged commit 4bf3119 into ggml-org:master on Jan 28, 2025
45 checks passed
someone13574 deleted the cpu-off-fix branch on January 28, 2025 at 14:44
tinglou pushed a commit to tinglou/llama.cpp that referenced this pull request Feb 13, 2025
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Feb 26, 2025
mglambda pushed a commit to mglambda/llama.cpp that referenced this pull request Mar 8, 2025
Labels
ggml (changes relating to the ggml tensor library for machine learning)
3 participants