@@ -1,41 +1,22 @@
-name: Low Severity Bugs
-description: Used to report low severity bugs in llama.cpp (e.g. cosmetic issues, non critical UI glitches)
-title: "Bug: "
-labels: ["bug-unconfirmed", "low severity"]
+name: Bug (model evaluation)
+description: Something goes wrong when evaluating a model without any complex components such as the server on top.
+title: "Eval bug: "
+labels: ["bug-unconfirmed", "model evaluation"]
 body:
   - type: markdown
     attributes:
       value: >
         Thanks for taking the time to fill out this bug report!
-        Please include information about your system, the steps to reproduce the bug,
-        and the version of llama.cpp that you are using.
-        If you encountered the bug using a third-party frontend (e.g. ollama),
-        please reproduce the bug using llama.cpp only.
+        This issue template is intended for bug reports where the model evaluation results
+        (i.e. the generated text) are incorrect or llama.cpp crashes during model evaluation.
+        If you encountered the issue while using an external UI (e.g. ollama),
+        please reproduce your issue using one of the examples/binaries in this repository.
         The `llama-cli` binary can be used for simple and reproducible model inference.
-  - type: textarea
-    id: what-happened
-    attributes:
-      label: What happened?
-      description: >
-        Please give us a summary of what happened.
-        If the problem is not obvious: what did you expect to happen?
-      placeholder: Tell us what you see!
-    validations:
-      required: true
-  - type: textarea
-    id: hardware
-    attributes:
-      label: Hardware
-      description: Which CPUs/GPUs and which GGML backends are you using?
-      placeholder: >
-        e.g. Ryzen 5950X + RTX 4090 (CUDA)
-    validations:
-      required: true
   - type: textarea
     id: version
     attributes:
       label: Name and Version
-      description: Which executable and which version of our software are you running? (use `--version` to get a version string)
+      description: Which version of our software are you running? (use `--version` to get a version string)
       placeholder: |
         $./llama-cli --version
         version: 2999 (42b4109e)
@@ -45,7 +26,7 @@
   - type: dropdown
     id: operating-system
     attributes:
-      label: What operating system are you seeing the problem on?
+      label: Which operating systems do you know to be affected?
       multiple: true
       options:
         - Linux
@@ -54,13 +35,29 @@ body:
         - BSD
         - Other? (Please let us know in description)
     validations:
-      required: false
+      required: true
+  - type: dropdown
+    id: backends
+    attributes:
+      label: GGML backends
+      description: Which GGML backends do you know to be affected?
+      options: [AMX, BLAS, CPU, CUDA, HIP, Kompute, Metal, Musa, RPC, SYCL, Vulkan]
+      multiple: true
+  - type: textarea
+    id: hardware
+    attributes:
+      label: Hardware
+      description: Which CPUs/GPUs are you using?
+      placeholder: >
+        e.g. Ryzen 5950X + 2x RTX 4090
+    validations:
+      required: true
   - type: textarea
     id: model
     attributes:
       label: Model
       description: >
-        If applicable: which model at which quantization were you using when encountering the bug?
+        Which model at which quantization were you using when encountering the bug?
         If you downloaded a GGUF file off of Huggingface, please provide a link.
       placeholder: >
         e.g. Meta LLaMA 3.1 Instruct 8b q4_K_M
@@ -71,7 +68,7 @@
     attributes:
       label: Steps to Reproduce
       description: >
-        Please tell us how to reproduce the bug.
+        Please tell us how to reproduce the bug and any additional information that you think could be useful for fixing it.
         If you can narrow down the bug to specific hardware, compile flags, or command line arguments,
         that information would be very much appreciated by us.
       placeholder: >
@@ -93,5 +90,9 @@
     id: logs
     attributes:
       label: Relevant log output
-      description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
+      description: >
+        Please copy and paste any relevant log output, including the command that you entered and any generated text.
+        This will be automatically formatted into code, so no need for backticks.
       render: shell
+    validations:
+      required: true