Commit 5e64e24

GitHub: ask for more info in issue templates
1 parent 8fd4b7f commit 5e64e24

4 files changed: +200 −12 lines

.github/ISSUE_TEMPLATE/01-bug-low.yml

Lines changed: 50 additions & 3 deletions
@@ -5,19 +5,32 @@ labels: ["bug-unconfirmed", "low severity"]
 body:
   - type: markdown
     attributes:
-      value: |
+      value: >
         Thanks for taking the time to fill out this bug report!
         Please include information about your system, the steps to reproduce the bug,
         and the version of llama.cpp that you are using.
-        If possible, please provide a minimal code example that reproduces the bug.
+        If you encountered the bug using a third-party frontend (e.g. ollama),
+        please reproduce the bug using llama.cpp only.
+        The `llama-cli` binary can be used for simple and reproducible model inference.
   - type: textarea
     id: what-happened
     attributes:
       label: What happened?
-      description: Also tell us, what did you expect to happen?
+      description: >
+        Please give us a summary of what happened.
+        If the problem is not obvious: what did you expect to happen?
       placeholder: Tell us what you see!
     validations:
       required: true
+  - type: textarea
+    id: hardware
+    attributes:
+      label: Hardware
+      description: Which CPUs/GPUs and which GGML backends are you using?
+      placeholder: >
+        e.g. Ryzen 5950X + RTX 4090 (CUDA)
+    validations:
+      required: true
   - type: textarea
     id: version
     attributes:
@@ -42,6 +55,40 @@ body:
       - Other? (Please let us know in description)
     validations:
       required: false
+  - type: textarea
+    id: model
+    attributes:
+      label: Model
+      description: >
+        If applicable: which model at which quantization were you using when encountering the bug?
+        If you downloaded a GGUF file off of Huggingface, please provide a link.
+      placeholder: >
+        e.g. Meta LLaMA 3.1 Instruct 8b q4_K_M
+    validations:
+      required: false
+  - type: textarea
+    id: steps_to_reproduce
+    attributes:
+      label: Steps to Reproduce
+      description: >
+        Please tell us how to reproduce the bug.
+        If you can narrow down the bug to specific hardware, compile flags, or command line arguments,
+        that information would be very much appreciated by us.
+      placeholder: >
+        e.g. when I run llama-cli with -ngl 99 I get garbled outputs.
+        When I use -ngl 0 it works correctly.
+        Here are the exact commands that I used: ...
+    validations:
+      required: true
+  - type: textarea
+    id: first_bad_commit
+    attributes:
+      label: First Bad Commit
+      description: >
+        If the bug was not present on an earlier version: when did it start appearing?
+        If possible, please do a git bisect and identify the exact commit that introduced the bug.
+    validations:
+      required: false
   - type: textarea
     id: logs
     attributes:
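
A note on the block scalar change above: swapping `value: |` for `value: >` switches YAML's literal block scalar for a folded one. With `|` every newline in the block is preserved in the resulting string; with `>` single newlines are folded into spaces (blank lines still act as paragraph breaks), so the template intro renders as flowing sentences instead of hard-wrapped lines. A minimal standalone YAML sketch of the difference (the key names are illustrative, not from the templates):

    # literal block scalar: newlines preserved
    literal: |
      first line
      second line
    # parses as the string "first line\nsecond line\n"

    # folded block scalar: single newlines folded into spaces
    folded: >
      first line
      second line
    # parses as the string "first line second line\n"

The same diff is applied to the other three severity templates below.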

.github/ISSUE_TEMPLATE/02-bug-medium.yml

Lines changed: 50 additions & 3 deletions
@@ -5,19 +5,32 @@ labels: ["bug-unconfirmed", "medium severity"]
 body:
   - type: markdown
     attributes:
-      value: |
+      value: >
         Thanks for taking the time to fill out this bug report!
         Please include information about your system, the steps to reproduce the bug,
         and the version of llama.cpp that you are using.
-        If possible, please provide a minimal code example that reproduces the bug.
+        If you encountered the bug using a third-party frontend (e.g. ollama),
+        please reproduce the bug using llama.cpp only.
+        The `llama-cli` binary can be used for simple and reproducible model inference.
   - type: textarea
     id: what-happened
     attributes:
       label: What happened?
-      description: Also tell us, what did you expect to happen?
+      description: >
+        Please give us a summary of what happened.
+        If the problem is not obvious: what did you expect to happen?
       placeholder: Tell us what you see!
     validations:
       required: true
+  - type: textarea
+    id: hardware
+    attributes:
+      label: Hardware
+      description: Which CPUs/GPUs and which GGML backends are you using?
+      placeholder: >
+        e.g. Ryzen 5950X + RTX 4090 (CUDA)
+    validations:
+      required: true
   - type: textarea
     id: version
     attributes:
@@ -42,6 +55,40 @@ body:
       - Other? (Please let us know in description)
     validations:
       required: false
+  - type: textarea
+    id: model
+    attributes:
+      label: Model
+      description: >
+        If applicable: which model at which quantization were you using when encountering the bug?
+        If you downloaded a GGUF file off of Huggingface, please provide a link.
+      placeholder: >
+        e.g. Meta LLaMA 3.1 Instruct 8b q4_K_M
+    validations:
+      required: false
+  - type: textarea
+    id: steps_to_reproduce
+    attributes:
+      label: Steps to Reproduce
+      description: >
+        Please tell us how to reproduce the bug.
+        If you can narrow down the bug to specific hardware, compile flags, or command line arguments,
+        that information would be very much appreciated by us.
+      placeholder: >
+        e.g. when I run llama-cli with -ngl 99 I get garbled outputs.
+        When I use -ngl 0 it works correctly.
+        Here are the exact commands that I used: ...
+    validations:
+      required: true
+  - type: textarea
+    id: first_bad_commit
+    attributes:
+      label: First Bad Commit
+      description: >
+        If the bug was not present on an earlier version: when did it start appearing?
+        If possible, please do a git bisect and identify the exact commit that introduced the bug.
+    validations:
+      required: false
   - type: textarea
     id: logs
     attributes:

.github/ISSUE_TEMPLATE/03-bug-high.yml

Lines changed: 50 additions & 3 deletions
@@ -5,19 +5,32 @@ labels: ["bug-unconfirmed", "high severity"]
 body:
   - type: markdown
     attributes:
-      value: |
+      value: >
         Thanks for taking the time to fill out this bug report!
         Please include information about your system, the steps to reproduce the bug,
         and the version of llama.cpp that you are using.
-        If possible, please provide a minimal code example that reproduces the bug.
+        If you encountered the bug using a third-party frontend (e.g. ollama),
+        please reproduce the bug using llama.cpp only.
+        The `llama-cli` binary can be used for simple and reproducible model inference.
   - type: textarea
     id: what-happened
     attributes:
       label: What happened?
-      description: Also tell us, what did you expect to happen?
+      description: >
+        Please give us a summary of what happened.
+        If the problem is not obvious: what did you expect to happen?
       placeholder: Tell us what you see!
     validations:
       required: true
+  - type: textarea
+    id: hardware
+    attributes:
+      label: Hardware
+      description: Which CPUs/GPUs and which GGML backends are you using?
+      placeholder: >
+        e.g. Ryzen 5950X + RTX 4090 (CUDA)
+    validations:
+      required: true
   - type: textarea
     id: version
     attributes:
@@ -42,6 +55,40 @@ body:
       - Other? (Please let us know in description)
     validations:
       required: false
+  - type: textarea
+    id: model
+    attributes:
+      label: Model
+      description: >
+        If applicable: which model at which quantization were you using when encountering the bug?
+        If you downloaded a GGUF file off of Huggingface, please provide a link.
+      placeholder: >
+        e.g. Meta LLaMA 3.1 Instruct 8b q4_K_M
+    validations:
+      required: false
+  - type: textarea
+    id: steps_to_reproduce
+    attributes:
+      label: Steps to Reproduce
+      description: >
+        Please tell us how to reproduce the bug.
+        If you can narrow down the bug to specific hardware, compile flags, or command line arguments,
+        that information would be very much appreciated by us.
+      placeholder: >
+        e.g. when I run llama-cli with -ngl 99 I get garbled outputs.
+        When I use -ngl 0 it works correctly.
+        Here are the exact commands that I used: ...
+    validations:
+      required: true
+  - type: textarea
+    id: first_bad_commit
+    attributes:
+      label: First Bad Commit
+      description: >
+        If the bug was not present on an earlier version: when did it start appearing?
+        If possible, please do a git bisect and identify the exact commit that introduced the bug.
+    validations:
+      required: false
   - type: textarea
     id: logs
     attributes:

.github/ISSUE_TEMPLATE/04-bug-critical.yml

Lines changed: 50 additions & 3 deletions
@@ -5,19 +5,32 @@ labels: ["bug-unconfirmed", "critical severity"]
 body:
   - type: markdown
     attributes:
-      value: |
+      value: >
         Thanks for taking the time to fill out this bug report!
         Please include information about your system, the steps to reproduce the bug,
         and the version of llama.cpp that you are using.
-        If possible, please provide a minimal code example that reproduces the bug.
+        If you encountered the bug using a third-party frontend (e.g. ollama),
+        please reproduce the bug using llama.cpp only.
+        The `llama-cli` binary can be used for simple and reproducible model inference.
   - type: textarea
     id: what-happened
     attributes:
       label: What happened?
-      description: Also tell us, what did you expect to happen?
+      description: >
+        Please give us a summary of what happened.
+        If the problem is not obvious: what did you expect to happen?
       placeholder: Tell us what you see!
     validations:
       required: true
+  - type: textarea
+    id: hardware
+    attributes:
+      label: Hardware
+      description: Which CPUs/GPUs and which GGML backends are you using?
+      placeholder: >
+        e.g. Ryzen 5950X + RTX 4090 (CUDA)
+    validations:
+      required: true
   - type: textarea
     id: version
     attributes:
@@ -42,6 +55,40 @@ body:
       - Other? (Please let us know in description)
     validations:
       required: false
+  - type: textarea
+    id: model
+    attributes:
+      label: Model
+      description: >
+        If applicable: which model at which quantization were you using when encountering the bug?
+        If you downloaded a GGUF file off of Huggingface, please provide a link.
+      placeholder: >
+        e.g. Meta LLaMA 3.1 Instruct 8b q4_K_M
+    validations:
+      required: false
+  - type: textarea
+    id: steps_to_reproduce
+    attributes:
+      label: Steps to Reproduce
+      description: >
+        Please tell us how to reproduce the bug.
+        If you can narrow down the bug to specific hardware, compile flags, or command line arguments,
+        that information would be very much appreciated by us.
+      placeholder: >
+        e.g. when I run llama-cli with -ngl 99 I get garbled outputs.
+        When I use -ngl 0 it works correctly.
+        Here are the exact commands that I used: ...
+    validations:
+      required: true
+  - type: textarea
+    id: first_bad_commit
+    attributes:
+      label: First Bad Commit
+      description: >
+        If the bug was not present on an earlier version: when did it start appearing?
+        If possible, please do a git bisect and identify the exact commit that introduced the bug.
+    validations:
+      required: false
   - type: textarea
     id: logs
     attributes:
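
For schema context: each of these files is a GitHub issue form, and all of the hunks above land in the same place, the top-level `body` list of form elements. Below is a hedged, minimal sketch of a complete form containing just the new `hardware` field; the top-level `name` and `description` lines fall outside the hunks shown, so they are assumed here rather than taken from the commit:

    name: Low Severity Bug                     # assumed; not visible in the hunks above
    description: Report a bug in llama.cpp     # assumed; not visible in the hunks above
    labels: ["bug-unconfirmed", "low severity"]
    body:
      - type: textarea
        id: hardware
        attributes:
          label: Hardware
          description: Which CPUs/GPUs and which GGML backends are you using?
          placeholder: >
            e.g. Ryzen 5950X + RTX 4090 (CUDA)
        validations:
          required: true

In this schema, `validations.required: true` blocks issue submission until the field is filled in, which is why Hardware and Steps to Reproduce are mandatory while Model and First Bad Commit remain optional.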
