Commit 037259b

llama : make load error reporting more granular (#5477)
Makes it easier to pinpoint where e.g. `unordered_map::at: key not found` comes from.
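
For illustration, here is a minimal, self-contained C++ sketch of the wrapping pattern the diff applies (not part of the commit; the stage function and the "n_embd" key are invented stand-ins for the real loader internals): catch the low-level exception and rethrow it with the name of the loading stage prepended, so the final message says which step failed.

#include <iostream>
#include <stdexcept>
#include <string>
#include <unordered_map>

// Hypothetical stand-in for one loader stage. The real llm_load_hparams reads
// many keys from the model metadata; unordered_map::at() throws std::out_of_range
// (e.g. "unordered_map::at: key not found") when a key is missing.
static void load_hparams_sketch(const std::unordered_map<std::string, int> & kv) {
    (void) kv.at("n_embd");
}

int main() {
    std::unordered_map<std::string, int> kv; // deliberately missing "n_embd"
    try {
        try {
            load_hparams_sketch(kv);
        } catch (const std::exception & e) {
            // Same pattern as the commit: prepend the stage name before rethrowing.
            throw std::runtime_error("error loading model hyperparameters: " + std::string(e.what()));
        }
    } catch (const std::exception & e) {
        std::cerr << e.what() << "\n";
    }
    return 0;
}

Running the sketch prints "error loading model hyperparameters: " followed by whatever out_of_range text the standard library produces, rather than the bare library message alone.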
1 parent 2639789 commit 037259b

llama.cpp

Lines changed: 15 additions & 3 deletions
@@ -4384,9 +4384,21 @@ static int llama_model_load(const std::string & fname, llama_model & model, llam
 
         model.hparams.vocab_only = params.vocab_only;
 
-        llm_load_arch   (ml, model);
-        llm_load_hparams(ml, model);
-        llm_load_vocab  (ml, model);
+        try {
+            llm_load_arch(ml, model);
+        } catch(const std::exception & e) {
+            throw std::runtime_error("error loading model architecture: " + std::string(e.what()));
+        }
+        try {
+            llm_load_hparams(ml, model);
+        } catch(const std::exception & e) {
+            throw std::runtime_error("error loading model hyperparameters: " + std::string(e.what()));
+        }
+        try {
+            llm_load_vocab(ml, model);
+        } catch(const std::exception & e) {
+            throw std::runtime_error("error loading model vocabulary: " + std::string(e.what()));
+        }
 
         llm_load_print_meta(ml, model);
 