
Commit 8f38ad6

refactor issue templates to be component-specific
1 parent 9543d01 commit 8f38ad6

File tree: 9 files changed, +173 -327 lines changed
Lines changed: 73 additions & 0 deletions
@@ -0,0 +1,73 @@
+name: Bug (compilation)
+description: Something goes wrong when trying to compile llama.cpp.
+title: "Compile bug: "
+labels: ["bug-unconfirmed", "compilation"]
+body:
+  - type: markdown
+    attributes:
+      value: >
+        Thanks for taking the time to fill out this bug report!
+        This issue template is intended for bug reports where the compilation of llama.cpp fails.
+        Before opening an issue, please confirm that the compilation still fails with `-DGGML_CCACHE=OFF`.
+        If the compilation succeeds with ccache disabled you should be able to permanently fix the issue
+        by clearing `~/.cache/ccache` (on Linux).
+  - type: textarea
+    id: commit
+    attributes:
+      label: Git commit
+      description: Which commit are you trying to compile?
+      placeholder: |
+        $git rev-parse HEAD
+        84a07a17b1b08cf2b9747c633a2372782848a27f
+    validations:
+      required: true
+  - type: dropdown
+    id: operating-system
+    attributes:
+      label: Which operating systems do you know to be affected?
+      multiple: true
+      options:
+        - Linux
+        - Mac
+        - Windows
+        - BSD
+        - Other? (Please let us know in description)
+    validations:
+      required: true
+  - type: dropdown
+    id: backends
+    attributes:
+      label: GGML backends
+      description: Which GGML backends do you know to be affected?
+      options: [AMX, BLAS, CPU, CUDA, HIP, Kompute, Metal, Musa, RPC, SYCL, Vulkan]
+      multiple: true
+  - type: textarea
+    id: steps_to_reproduce
+    attributes:
+      label: Steps to Reproduce
+      description: >
+        Please tell us how to reproduce the bug and any additional information that you think could be useful for fixing it.
+        If you can narrow down the bug to specific compile flags, that information would be very much appreciated by us.
+      placeholder: >
+        Here are the exact commands that I used: ...
+    validations:
+      required: true
+  - type: textarea
+    id: first_bad_commit
+    attributes:
+      label: First Bad Commit
+      description: >
+        If the bug was not present on an earlier version: when did it start appearing?
+        If possible, please do a git bisect and identify the exact commit that introduced the bug.
+    validations:
+      required: false
+  - type: textarea
+    id: logs
+    attributes:
+      label: Relevant log output
+      description: >
+        Please copy and paste any relevant log output, including the command that you entered and any generated text.
+        This will be automatically formatted into code, so no need for backticks.
+      render: shell
+    validations:
+      required: true
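The compile-bug template above is a GitHub issue-form definition: a top-level `name`, `description`, and a `body` list whose elements each declare a `type`. A minimal sketch of a sanity check over that shape — this is a hypothetical helper mirroring the documented structure, not GitHub's actual validator, and the rule that non-markdown elements carry an `id` is a convention of this sketch:

```python
def check_issue_form(form: dict) -> list[str]:
    """Return a list of structural problems found in an issue-form dict."""
    problems = []
    # GitHub issue forms require these top-level keys.
    for key in ("name", "description", "body"):
        if key not in form:
            problems.append(f"missing top-level key: {key}")
    for i, element in enumerate(form.get("body", [])):
        if "type" not in element:
            problems.append(f"body[{i}] has no type")
        elif element["type"] != "markdown" and "id" not in element:
            # markdown blocks are display-only; input elements should carry an id
            problems.append(f"body[{i}] ({element['type']}) has no id")
    return problems

# A stripped-down version of the compile-bug template above:
form = {
    "name": "Bug (compilation)",
    "description": "Something goes wrong when trying to compile llama.cpp.",
    "body": [
        {"type": "markdown", "attributes": {"value": "Thanks!"}},
        {"type": "textarea", "id": "commit", "validations": {"required": True}},
        {"type": "dropdown", "id": "backends"},
    ],
}
print(check_issue_form(form))  # → []
```

In practice the templates are YAML files; a check like this would run after loading them with a YAML parser.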
Lines changed: 34 additions & 33 deletions
@@ -1,41 +1,22 @@
-name: Low Severity Bugs
-description: Used to report low severity bugs in llama.cpp (e.g. cosmetic issues, non critical UI glitches)
-title: "Bug: "
-labels: ["bug-unconfirmed", "low severity"]
+name: Bug (model evaluation)
+description: Something goes wrong when evaluating a model without any complex components such as the server on top.
+title: "Eval bug: "
+labels: ["bug-unconfirmed", "model evaluation"]
 body:
   - type: markdown
     attributes:
       value: >
         Thanks for taking the time to fill out this bug report!
-        Please include information about your system, the steps to reproduce the bug,
-        and the version of llama.cpp that you are using.
-        If you encountered the bug using a third-party frontend (e.g. ollama),
-        please reproduce the bug using llama.cpp only.
+        This issue template is intended for bug reports where the model evaluation results
+        (i.e. the generated text) are incorrect or llama.cpp crashes during model evaluation.
+        If you encountered the issue while using an external UI (e.g. ollama),
+        please reproduce your issue using one of the examples/binaries in this repository.
         The `llama-cli` binary can be used for simple and reproducible model inference.
-  - type: textarea
-    id: what-happened
-    attributes:
-      label: What happened?
-      description: >
-        Please give us a summary of what happened.
-        If the problem is not obvious: what did you expect to happen?
-      placeholder: Tell us what you see!
-    validations:
-      required: true
-  - type: textarea
-    id: hardware
-    attributes:
-      label: Hardware
-      description: Which CPUs/GPUs and which GGML backends are you using?
-      placeholder: >
-        e.g. Ryzen 5950X + RTX 4090 (CUDA)
-    validations:
-      required: true
   - type: textarea
     id: version
     attributes:
       label: Name and Version
-      description: Which executable and which version of our software are you running? (use `--version` to get a version string)
+      description: Which version of our software are you running? (use `--version` to get a version string)
       placeholder: |
         $./llama-cli --version
         version: 2999 (42b4109e)
@@ -45,7 +26,7 @@ body:
   - type: dropdown
     id: operating-system
     attributes:
-      label: What operating system are you seeing the problem on?
+      label: Which operating systems do you know to be affected?
       multiple: true
       options:
         - Linux
@@ -54,13 +35,29 @@ body:
         - BSD
         - Other? (Please let us know in description)
     validations:
-      required: false
+      required: true
+  - type: dropdown
+    id: backends
+    attributes:
+      label: GGML backends
+      description: Which GGML backends do you know to be affected?
+      options: [AMX, BLAS, CPU, CUDA, HIP, Kompute, Metal, Musa, RPC, SYCL, Vulkan]
+      multiple: true
+  - type: textarea
+    id: hardware
+    attributes:
+      label: Hardware
+      description: Which CPUs/GPUs are you using?
+      placeholder: >
+        e.g. Ryzen 5950X + 2x RTX 4090
+    validations:
+      required: true
   - type: textarea
     id: model
     attributes:
       label: Model
       description: >
-        If applicable: which model at which quantization were you using when encountering the bug?
+        Which model at which quantization were you using when encountering the bug?
         If you downloaded a GGUF file off of Huggingface, please provide a link.
       placeholder: >
         e.g. Meta LLaMA 3.1 Instruct 8b q4_K_M
@@ -71,7 +68,7 @@ body:
     attributes:
       label: Steps to Reproduce
       description: >
-        Please tell us how to reproduce the bug.
+        Please tell us how to reproduce the bug and any additional information that you think could be useful for fixing it.
         If you can narrow down the bug to specific hardware, compile flags, or command line arguments,
         that information would be very much appreciated by us.
       placeholder: >
@@ -93,5 +90,9 @@ body:
     id: logs
     attributes:
       label: Relevant log output
-      description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
+      description: >
+        Please copy and paste any relevant log output, including the command that you entered and any generated text.
+        This will be automatically formatted into code, so no need for backticks.
       render: shell
+    validations:
+      required: true
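When a reporter submits one of these forms, GitHub turns each answered element into a section of the issue body, with the element's `label` as a heading and fields marked `render: shell` wrapped in a fenced code block. A rough sketch of that mapping for the eval-bug template above — the exact output format is GitHub's and is only approximated here:

```python
def render_issue(fields):
    """fields: list of (label, answer, render) tuples -> markdown issue body."""
    parts = []
    for label, answer, render in fields:
        parts.append(f"### {label}\n")
        if render:
            # e.g. render: shell -> answer lands in a fenced ```shell block,
            # which is why the template says "no need for backticks"
            parts.append(f"```{render}\n{answer}\n```\n")
        else:
            parts.append(f"{answer}\n")
    return "\n".join(parts)

body = render_issue([
    ("Name and Version", "version: 2999 (42b4109e)", None),
    ("Relevant log output", "$ ./llama-cli -m model.gguf", "shell"),
])
print(body)
```

This also shows why each input element wants a distinct `label`: the labels become the only structure the submitted issue retains.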
Lines changed: 63 additions & 0 deletions
@@ -0,0 +1,63 @@
+name: Bug (misc.)
+description: Something is not working the way it should (and it's not covered by any of the above cases).
+title: "Misc. bug: "
+labels: ["bug-unconfirmed"]
+body:
+  - type: markdown
+    attributes:
+      value: >
+        Thanks for taking the time to fill out this bug report!
+        This issue template is intended for miscellaneous bugs that don't fit into any other category.
+        If you encountered the issue while using an external UI (e.g. ollama),
+        please reproduce your issue using one of the examples/binaries in this repository.
+  - type: textarea
+    id: version
+    attributes:
+      label: Name and Version
+      description: Which version of our software are you running? (use `--version` to get a version string)
+      placeholder: |
+        $./llama-cli --version
+        version: 2999 (42b4109e)
+        built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
+    validations:
+      required: true
+  - type: dropdown
+    id: operating-system
+    attributes:
+      label: Which operating systems do you know to be affected?
+      multiple: true
+      options:
+        - Linux
+        - Mac
+        - Windows
+        - BSD
+        - Other? (Please let us know in description)
+    validations:
+      required: true
+  - type: textarea
+    id: steps_to_reproduce
+    attributes:
+      label: Steps to Reproduce
+      description: >
+        Please tell us how to reproduce the bug and any additional information that you think could be useful for fixing it.
+    validations:
+      required: true
+  - type: textarea
+    id: first_bad_commit
+    attributes:
+      label: First Bad Commit
+      description: >
+        If the bug was not present on an earlier version: when did it start appearing?
+        If possible, please do a git bisect and identify the exact commit that introduced the bug.
+    validations:
+      required: false
+  - type: textarea
+    id: logs
+    attributes:
+      label: Relevant log output
+      description: >
+        Please copy and paste any relevant log output, including the command that you entered and any generated text.
+        This will be automatically formatted into code, so no need for backticks.
+      render: shell
+    validations:
+      required: true
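The "First Bad Commit" field asks reporters to run `git bisect`, which is a binary search over linear commit history: given that some prefix of commits is good and everything after is bad, it finds the first bad commit in O(log n) builds. A minimal sketch of that search — the commit list and the badness predicate are made up for illustration:

```python
def first_bad(commits, is_bad):
    """Return the first commit for which is_bad() is True.

    Assumes is_bad is monotone over the list: False...False True...True,
    which is the same assumption git bisect makes about good/bad commits.
    """
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid          # first bad commit is at mid or earlier
        else:
            lo = mid + 1      # still good here; look later
    return commits[lo]

commits = ["c0", "c1", "c2", "c3", "c4", "c5"]
print(first_bad(commits, lambda c: c >= "c3"))  # → c3
```

In real use, each `is_bad` probe is a rebuild plus a test run, which is why narrowing a bug to one commit out of thousands typically takes only a dozen or so builds.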

.github/ISSUE_TEMPLATE/02-bug-medium.yml

Lines changed: 0 additions & 97 deletions
This file was deleted.

.github/ISSUE_TEMPLATE/05-enhancement.yml renamed to .github/ISSUE_TEMPLATE/020-enhancement.yml

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 name: Enhancement
-description: Used to request enhancements for llama.cpp
+description: Used to request enhancements for llama.cpp.
 title: "Feature Request: "
 labels: ["enhancement"]
 body:

0 commit comments