llama-tts refactor console output #12640


Merged 1 commit into ggml-org:master on Mar 31, 2025
Conversation

marcoStocchi
Contributor

The llama token output is now emitted with LOG_INF instead of printf(). As a result, the '--log-disable' option suppresses the printed llama tokens along with the rest of the log output, and the '--log-file' option writes them to the specified log file.
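
For reference, a minimal sketch of the kind of change this describes, assuming a token-printing loop in tts.cpp; the helper name and variables are illustrative, not the actual diff:

```cpp
#include <vector>
#include "log.h"   // llama.cpp common logging macros (LOG_INF, ...)
#include "llama.h" // llama_token

// Hypothetical helper showing the pattern: route the token dump through
// LOG_INF rather than printf(), so --log-disable and --log-file apply to it
// the same way they apply to every other log message.
static void print_prompt_tokens(const std::vector<llama_token> & tokens) {
    // before: printf("%d, ", tokens[i]);  // bypassed the logging system
    LOG_INF("prompt tokens: ");
    for (size_t i = 0; i < tokens.size(); ++i) {
        LOG_INF("%d%s", tokens[i], i + 1 < tokens.size() ? ", " : "\n");
    }
}
```

With this pattern, running llama-tts with '--log-disable' silences the token dump together with all other logging, and '--log-file somefile.log' redirects it to that file instead of stdout.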

* tts.cpp : llama tokens console output is done using LOG_INF instead of printf(), so the options '--log-disable' and '--log-file' now have a uniform effect on all output.
ggerganov merged commit 52de2e5 into ggml-org:master on Mar 31, 2025
48 checks passed