
llama-bench : fix unexpected global variable initialization order issue #11832


Merged · 2 commits · Feb 14, 2025
11 changes: 6 additions & 5 deletions examples/llama-bench/llama-bench.cpp
@@ -876,8 +876,8 @@ static std::vector<cmd_params_instance> get_cmd_params_instances(const cmd_param
 struct test {
     static const std::string build_commit;
     static const int build_number;
-    static const std::string cpu_info;
-    static const std::string gpu_info;
+    const std::string cpu_info;
+    const std::string gpu_info;
     std::string model_filename;
     std::string model_type;
     uint64_t model_size;
@@ -903,7 +903,10 @@ struct test {
     std::string test_time;
     std::vector<uint64_t> samples_ns;
 
-    test(const cmd_params_instance & inst, const llama_model * lmodel, const llama_context * ctx) {
+    test(const cmd_params_instance & inst, const llama_model * lmodel, const llama_context * ctx) :
+        cpu_info(get_cpu_info()),
+        gpu_info(get_gpu_info()) {
+
         model_filename = inst.model;
         char buf[128];
         llama_model_desc(lmodel, buf, sizeof(buf));
@@ -1058,8 +1061,6 @@ struct test {
 
 const std::string test::build_commit = LLAMA_COMMIT;
 const int test::build_number = LLAMA_BUILD_NUMBER;
-const std::string test::cpu_info = get_cpu_info();
-const std::string test::gpu_info = get_gpu_info();
 
 struct printer {
     virtual ~printer() {}