[SYCL] Fix build on Windows when ccache enabled (#9954) #9976

Merged
NeoZhangJianyu merged 3 commits into ggml-org:master on Mar 21, 2025

Conversation

MakeDecisionWorth (Contributor, PR author)

NeoZhangJianyu (Collaborator)

@shou692199 How much time is saved by ccache in your case?

MakeDecisionWorth (Contributor, PR author)

> @shou692199 How much time is saved by ccache in your case?

About 5 sec on my 16C32T computer.

Rbiessy (Collaborator) left a comment

I'm not sure about this change. It seems to reduce the cache hit rate on Unix. Just compiling llama-bench on the main branch, I get:

$ ccache -s
Cacheable calls:   192 /  288 (66.67%)
  Hits:            144 /  192 (75.00%)
    Direct:        144 /  144 (100.0%)
    Preprocessed:    0 /  144 ( 0.00%)
  Misses:           48 /  192 (25.00%)
Uncacheable calls:  96 /  288 (33.33%)
Local storage:
  Cache size (GB): 0.0 / 30.0 ( 0.01%)
  Hits:            144 /  192 (75.00%)
  Misses:           48 /  192 (25.00%)

but with this patch the hit rate falls to 50%:

$ ccache -s
Cacheable calls:    96 /  144 (66.67%)
  Hits:             48 /   96 (50.00%)
    Direct:         48 /   48 (100.0%)
    Preprocessed:    0 /   48 ( 0.00%)
  Misses:           48 /   96 (50.00%)
Uncacheable calls:  48 /  144 (33.33%)
Local storage:
  Cache size (GB): 0.0 / 30.0 ( 0.01%)
  Hits:             48 /   96 (50.00%)
  Misses:           48 /   96 (50.00%)

It is curious that this compiler_type is not needed on Unix. Maybe it would be safer to set it only for Windows?

The github-actions bot added the ggml label (changes relating to the ggml tensor library for machine learning) on Jan 5, 2025
Rbiessy (Collaborator) left a comment

Thanks, I'm not sure what the impact of setting icl as the compiler type is, but it should be fine. I'm happy if @NeoZhangJianyu is too.
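
For context, a minimal sketch of the Windows-only guard being discussed, written in CMake. It assumes ggml's GGML_SYCL and GGML_CCACHE_FOUND options and relies on CMake's compiler-launcher variables plus ccache's compiler_type override; it illustrates the idea and is not the verbatim patch.

# Sketch only: force ccache to treat the Intel compiler driver as icl,
# but do so solely for Windows SYCL builds so Unix cache hits are unaffected.
if (GGML_SYCL AND GGML_CCACHE_FOUND AND WIN32)
    # ccache's automatic compiler detection can misidentify the Intel driver
    # on Windows; passing compiler_type=icl before the compiler command
    # overrides it (recent ccache releases accept key=value overrides there).
    set(CMAKE_C_COMPILER_LAUNCHER   "ccache;compiler_type=icl")
    set(CMAKE_CXX_COMPILER_LAUNCHER "ccache;compiler_type=icl")
else()
    set(CMAKE_C_COMPILER_LAUNCHER   ccache)
    set(CMAKE_CXX_COMPILER_LAUNCHER ccache)
endif()

Keeping the launcher unchanged outside the WIN32 branch is what preserves the 75% hit rate measured on Unix above.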

Rbiessy (Collaborator) commented on Mar 20, 2025

I fixed some conflicts; I'm planning to merge it tomorrow.

NeoZhangJianyu merged commit 1aa87ee into ggml-org:master on Mar 21, 2025 (48 checks passed)
Ivy233 pushed a commit to Ivy233/llama.cpp referencing this pull request on Mar 23, 2025:
…-org#9976)

* [SYCL] Fix build on Windows when ccache enabled (ggml-org#9954)

* take effect only on windows and force it to icl

---------

Co-authored-by: Romain Biessy <[email protected]>
Labels: ggml (changes relating to the ggml tensor library for machine learning)
Projects: None yet
4 participants