Commit 599b3e0

GitHub: ask for more info in issue templates (#10426)
* GitHub: ask for more info in issues [no ci]
* refactor issue templates to be component-specific
* more understandable issue description
* add dropdown for llama.cpp module
1 parent c18610b commit 599b3e0

10 files changed: +252 -203 lines changed

.github/ISSUE_TEMPLATE/01-bug-low.yml

Lines changed: 0 additions & 50 deletions
This file was deleted.
.github/ISSUE_TEMPLATE/010-bug-compilation.yml

Lines changed: 73 additions & 0 deletions
@@ -0,0 +1,73 @@
name: Bug (compilation)
description: Something goes wrong when trying to compile llama.cpp.
title: "Compile bug: "
labels: ["bug-unconfirmed", "compilation"]
body:
  - type: markdown
    attributes:
      value: >
        Thanks for taking the time to fill out this bug report!
        This issue template is intended for bug reports where the compilation of llama.cpp fails.
        Before opening an issue, please confirm that the compilation still fails with `-DGGML_CCACHE=OFF`.
        If the compilation succeeds with ccache disabled you should be able to permanently fix the issue
        by clearing `~/.cache/ccache` (on Linux).
  - type: textarea
    id: commit
    attributes:
      label: Git commit
      description: Which commit are you trying to compile?
      placeholder: |
        $git rev-parse HEAD
        84a07a17b1b08cf2b9747c633a2372782848a27f
    validations:
      required: true
  - type: dropdown
    id: operating-system
    attributes:
      label: Which operating systems do you know to be affected?
      multiple: true
      options:
        - Linux
        - Mac
        - Windows
        - BSD
        - Other? (Please let us know in description)
    validations:
      required: true
  - type: dropdown
    id: backends
    attributes:
      label: GGML backends
      description: Which GGML backends do you know to be affected?
      options: [AMX, BLAS, CPU, CUDA, HIP, Kompute, Metal, Musa, RPC, SYCL, Vulkan]
      multiple: true
  - type: textarea
    id: steps_to_reproduce
    attributes:
      label: Steps to Reproduce
      description: >
        Please tell us how to reproduce the bug and any additional information that you think could be useful for fixing it.
        If you can narrow down the bug to specific compile flags, that information would be very much appreciated by us.
      placeholder: >
        Here are the exact commands that I used: ...
    validations:
      required: true
  - type: textarea
    id: first_bad_commit
    attributes:
      label: First Bad Commit
      description: >
        If the bug was not present on an earlier version: when did it start appearing?
        If possible, please do a git bisect and identify the exact commit that introduced the bug.
    validations:
      required: false
  - type: textarea
    id: logs
    attributes:
      label: Relevant log output
      description: >
        Please copy and paste any relevant log output, including the command that you entered and any generated text.
        This will be automatically formatted into code, so no need for backticks.
      render: shell
    validations:
      required: true
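
The markdown note in this template asks reporters to retry with `-DGGML_CCACHE=OFF` before filing. As a minimal sketch of that check, assuming a standard CMake build of llama.cpp (the build directory name is an example):

    # configure with ccache disabled to rule out stale cached objects
    cmake -B build -DGGML_CCACHE=OFF
    cmake --build build --config Release
    # if this succeeds, clearing the cache (on Linux) usually fixes the original build
    rm -rf ~/.cache/ccache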
.github/ISSUE_TEMPLATE/011-bug-results.yml

Lines changed: 98 additions & 0 deletions
@@ -0,0 +1,98 @@
name: Bug (model use)
description: Something goes wrong when using a model (in general, not specific to a single llama.cpp module).
title: "Eval bug: "
labels: ["bug-unconfirmed", "model evaluation"]
body:
  - type: markdown
    attributes:
      value: >
        Thanks for taking the time to fill out this bug report!
        This issue template is intended for bug reports where the model evaluation results
        (i.e. the generated text) are incorrect or llama.cpp crashes during model evaluation.
        If you encountered the issue while using an external UI (e.g. ollama),
        please reproduce your issue using one of the examples/binaries in this repository.
        The `llama-cli` binary can be used for simple and reproducible model inference.
  - type: textarea
    id: version
    attributes:
      label: Name and Version
      description: Which version of our software are you running? (use `--version` to get a version string)
      placeholder: |
        $./llama-cli --version
        version: 2999 (42b4109e)
        built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
    validations:
      required: true
  - type: dropdown
    id: operating-system
    attributes:
      label: Which operating systems do you know to be affected?
      multiple: true
      options:
        - Linux
        - Mac
        - Windows
        - BSD
        - Other? (Please let us know in description)
    validations:
      required: true
  - type: dropdown
    id: backends
    attributes:
      label: GGML backends
      description: Which GGML backends do you know to be affected?
      options: [AMX, BLAS, CPU, CUDA, HIP, Kompute, Metal, Musa, RPC, SYCL, Vulkan]
      multiple: true
  - type: textarea
    id: hardware
    attributes:
      label: Hardware
      description: Which CPUs/GPUs are you using?
      placeholder: >
        e.g. Ryzen 5950X + 2x RTX 4090
    validations:
      required: true
  - type: textarea
    id: model
    attributes:
      label: Model
      description: >
        Which model at which quantization were you using when encountering the bug?
        If you downloaded a GGUF file off of Huggingface, please provide a link.
      placeholder: >
        e.g. Meta LLaMA 3.1 Instruct 8b q4_K_M
    validations:
      required: false
  - type: textarea
    id: steps_to_reproduce
    attributes:
      label: Steps to Reproduce
      description: >
        Please tell us how to reproduce the bug and any additional information that you think could be useful for fixing it.
        If you can narrow down the bug to specific hardware, compile flags, or command line arguments,
        that information would be very much appreciated by us.
      placeholder: >
        e.g. when I run llama-cli with -ngl 99 I get garbled outputs.
        When I use -ngl 0 it works correctly.
        Here are the exact commands that I used: ...
    validations:
      required: true
  - type: textarea
    id: first_bad_commit
    attributes:
      label: First Bad Commit
      description: >
        If the bug was not present on an earlier version: when did it start appearing?
        If possible, please do a git bisect and identify the exact commit that introduced the bug.
    validations:
      required: false
  - type: textarea
    id: logs
    attributes:
      label: Relevant log output
      description: >
        Please copy and paste any relevant log output, including the command that you entered and any generated text.
        This will be automatically formatted into code, so no need for backticks.
      render: shell
    validations:
      required: true
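
The steps-to-reproduce placeholder in this template hints at the most useful kind of repro: the same invocation with and without GPU offload. A minimal sketch with `llama-cli`, where the model path and prompt are assumed examples:

    # version string requested by the "Name and Version" field
    ./llama-cli --version
    # same prompt with full GPU offload and CPU-only, to isolate the backend
    ./llama-cli -m models/llama-3.1-8b-instruct-q4_K_M.gguf -p "Hello" -n 32 -ngl 99
    ./llama-cli -m models/llama-3.1-8b-instruct-q4_K_M.gguf -p "Hello" -n 32 -ngl 0

If the output is only garbled with -ngl 99, that narrows the bug to the GPU backend rather than the model file.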
.github/ISSUE_TEMPLATE/019-bug-misc.yml

Lines changed: 78 additions & 0 deletions
@@ -0,0 +1,78 @@
name: Bug (misc.)
description: Something is not working the way it should (and it's not covered by any of the above cases).
title: "Misc. bug: "
labels: ["bug-unconfirmed"]
body:
  - type: markdown
    attributes:
      value: >
        Thanks for taking the time to fill out this bug report!
        This issue template is intended for miscellaneous bugs that don't fit into any other category.
        If you encountered the issue while using an external UI (e.g. ollama),
        please reproduce your issue using one of the examples/binaries in this repository.
  - type: textarea
    id: version
    attributes:
      label: Name and Version
      description: Which version of our software are you running? (use `--version` to get a version string)
      placeholder: |
        $./llama-cli --version
        version: 2999 (42b4109e)
        built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
    validations:
      required: true
  - type: dropdown
    id: operating-system
    attributes:
      label: Which operating systems do you know to be affected?
      multiple: true
      options:
        - Linux
        - Mac
        - Windows
        - BSD
        - Other? (Please let us know in description)
    validations:
      required: true
  - type: dropdown
    id: module
    attributes:
      label: Which llama.cpp modules do you know to be affected?
      multiple: true
      options:
        - libllama (core library)
        - llama-cli
        - llama-server
        - llama-bench
        - llama-quantize
        - Python/Bash scripts
        - Other (Please specify in the next section)
    validations:
      required: true
  - type: textarea
    id: steps_to_reproduce
    attributes:
      label: Steps to Reproduce
      description: >
        Please tell us how to reproduce the bug and any additional information that you think could be useful for fixing it.
    validations:
      required: true
  - type: textarea
    id: first_bad_commit
    attributes:
      label: First Bad Commit
      description: >
        If the bug was not present on an earlier version: when did it start appearing?
        If possible, please do a git bisect and identify the exact commit that introduced the bug.
    validations:
      required: false
  - type: textarea
    id: logs
    attributes:
      label: Relevant log output
      description: >
        Please copy and paste any relevant log output, including the command that you entered and any generated text.
        This will be automatically formatted into code, so no need for backticks.
      render: shell
    validations:
      required: true
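
Each bug template's "First Bad Commit" field asks for a `git bisect` result. A minimal sketch of that workflow; the known-good ref b4000 is an assumed example, substitute one that worked for you:

    git bisect start
    git bisect bad HEAD      # the current commit shows the bug
    git bisect good b4000    # an assumed last-known-good tag or commit
    # rebuild and test at each commit git checks out, then mark it with
    # `git bisect good` or `git bisect bad`; git prints the first bad commit
    git bisect reset         # return to your original branch when done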

.github/ISSUE_TEMPLATE/02-bug-medium.yml

Lines changed: 0 additions & 50 deletions
This file was deleted.

.github/ISSUE_TEMPLATE/05-enhancement.yml renamed to .github/ISSUE_TEMPLATE/020-enhancement.yml

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 name: Enhancement
-description: Used to request enhancements for llama.cpp
+description: Used to request enhancements for llama.cpp.
 title: "Feature Request: "
 labels: ["enhancement"]
 body:
