ci: use sccache on windows instead of ccache #11545

Merged: 4 commits merged into ggml-org:master from ci-win-sscache on Jan 31, 2025

Conversation

ochafik (Collaborator) commented Jan 31, 2025

As advised on https://github.com/hendrikmuhs/ccache-action

Fixes the Windows SYCL build failure caused by #11516 (example failure)

(Cc @ggerganov, ref)

@github-actions github-actions bot added the devops (improvements to build systems and github actions) label Jan 31, 2025
@github-actions github-actions bot added the ggml (changes relating to the ggml tensor library for machine learning) label Jan 31, 2025
@ochafik ochafik marked this pull request as ready for review January 31, 2025 15:06
@ochafik ochafik mentioned this pull request Jan 31, 2025
@ochafik ochafik requested review from ggerganov and slaren and removed request for ggerganov and slaren January 31, 2025 15:38
slaren (Member) commented Jan 31, 2025

Please let the second HIP run finish to make sure that it is working. Considering the size of the cache, I suspect that sccache may not be compatible with the HIP compiler.

Save cache using key "sccache-windows-latest-cmake-hip-2025-01-31T15:17:55.700Z".
"C:\Program Files\Git\usr\bin\tar.exe" --posix -cf cache.tzst --exclude cache.tzst -P -C D:/a/llama.cpp/llama.cpp --files-from manifest.txt --force-local --use-compress-program "zstd -T0"
Cache Size: ~8 MB (7945703 B)
Cache saved successfully

@ochafik ochafik requested review from slaren and removed request for ggerganov January 31, 2025 15:46
ochafik (Collaborator, Author) commented Jan 31, 2025

> Please let the second HIP run finish to make sure that it is working

@slaren It's finally green!

> Considering the size of the cache, I suspect that sccache may not be compatible with the HIP compiler.

Urgh, good catch. We'll need to play with CMAKE_HIP_COMPILER_LAUNCHER (ROCm/ROCm#2817); I'll send that separately.
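
For context, CMake supports per-language compiler launchers via CMAKE_<LANG>_COMPILER_LAUNCHER, and CMAKE_HIP_COMPILER_LAUNCHER is the HIP variant (available since CMake gained HIP language support in 3.21). A minimal sketch of what such a follow-up could look like, assuming sccache is the launcher being wired in (illustrative only, not the actual follow-up change):

```cmake
# Illustrative sketch: route the HIP compiler through sccache,
# analogous to CMAKE_C_COMPILER_LAUNCHER / CMAKE_CXX_COMPILER_LAUNCHER.
find_program(SCCACHE_PROGRAM sccache)
if (SCCACHE_PROGRAM)
    # Only takes effect for targets compiled with the HIP language.
    set(CMAKE_HIP_COMPILER_LAUNCHER "${SCCACHE_PROGRAM}")
endif()
```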

slaren (Member) left a comment

ccache seems to work with HIP (https://github.com/ggerganov/llama.cpp/actions/runs/13012980277/job/36294999875), so if you cannot make sccache work, we can always revert to ccache.

@ochafik ochafik merged commit aa6fb13 into ggml-org:master Jan 31, 2025
45 checks passed
@ochafik ochafik deleted the ci-win-sscache branch January 31, 2025 17:23
tinglou pushed a commit to tinglou/llama.cpp that referenced this pull request Feb 13, 2025
* Use sccache on ci for windows

* Detect sccache in cmake
orca-zhang and arthw (Feb 26, 2025) and mglambda (Mar 8, 2025) also pushed commits referencing this pull request to their llama.cpp forks, with the same commit messages.
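
The "Detect sccache in cmake" commit suggests the build system picks up a compiler cache automatically when one is found on PATH. A minimal sketch of such detection, using illustrative variable names rather than the exact llama.cpp options:

```cmake
# Illustrative sketch: prefer ccache, fall back to sccache, and use it
# as the compiler launcher for C and C++ if either tool is found.
find_program(CCACHE_PROGRAM ccache)
find_program(SCCACHE_PROGRAM sccache)

if (CCACHE_PROGRAM)
    set(COMPILER_LAUNCHER "${CCACHE_PROGRAM}")
elseif (SCCACHE_PROGRAM)
    set(COMPILER_LAUNCHER "${SCCACHE_PROGRAM}")
endif()

if (DEFINED COMPILER_LAUNCHER)
    set(CMAKE_C_COMPILER_LAUNCHER   "${COMPILER_LAUNCHER}")
    set(CMAKE_CXX_COMPILER_LAUNCHER "${COMPILER_LAUNCHER}")
endif()
```

(Whether ccache or sccache is preferred, and which build options gate this, are assumptions here; the actual logic lives in the repository's CMake files.)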