Commit 27d5358

docs: Update readme examples to use newer Qwen2 model (#1544)

1 parent 5beec1a commit 27d5358

File tree

1 file changed: +2 −2 lines changed


README.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -327,7 +327,7 @@ You'll need to install the `huggingface-hub` package to use this feature (`pip i
 
 ```python
 llm = Llama.from_pretrained(
-    repo_id="Qwen/Qwen1.5-0.5B-Chat-GGUF",
+    repo_id="Qwen/Qwen2-0.5B-Instruct-GGUF",
     filename="*q8_0.gguf",
     verbose=False
 )
````
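A note on the snippet being changed above: the `filename` argument is a glob pattern rather than an exact file name, so `from_pretrained` can pick the matching quantization out of the repo's GGUF files. A minimal sketch of that pattern matching with the stdlib `fnmatch` module (the file names below are illustrative, not the repo's actual listing):

```python
from fnmatch import fnmatch

# Hypothetical GGUF files in a model repo.
repo_files = [
    "qwen2-0_5b-instruct-q4_k_m.gguf",
    "qwen2-0_5b-instruct-q8_0.gguf",
]

# The pattern from the README snippet: any file ending in "q8_0.gguf".
pattern = "*q8_0.gguf"
matches = [f for f in repo_files if fnmatch(f, pattern)]
print(matches)  # → ['qwen2-0_5b-instruct-q8_0.gguf']
```

Only the q8_0 quantization matches, which is why the same `"*q8_0.gguf"` value keeps working after the repo swap: the new Qwen2 repo also publishes a q8_0 file.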
````diff
@@ -688,7 +688,7 @@ For possible options, see [llama_cpp/llama_chat_format.py](llama_cpp/llama_chat_
 If you have `huggingface-hub` installed, you can also use the `--hf_model_repo_id` flag to load a model from the Hugging Face Hub.
 
 ```bash
-python3 -m llama_cpp.server --hf_model_repo_id Qwen/Qwen1.5-0.5B-Chat-GGUF --model '*q8_0.gguf'
+python3 -m llama_cpp.server --hf_model_repo_id Qwen/Qwen2-0.5B-Instruct-GGUF --model '*q8_0.gguf'
 ```
 
 ### Web Server Features
````
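The server launched by the command in this hunk speaks an OpenAI-compatible HTTP API, so a chat request is just a JSON POST. A minimal stdlib sketch that only assembles the request without sending it (the default port 8000 and the `/v1/chat/completions` path are assumptions here; adjust to your `--host`/`--port` settings):

```python
import json
from urllib.request import Request

# Assumed default address of llama_cpp.server.
url = "http://localhost:8000/v1/chat/completions"

# OpenAI-style chat payload; the loaded model is used by default.
body = json.dumps({
    "messages": [{"role": "user", "content": "Say hello."}],
    "max_tokens": 32,
}).encode()

req = Request(url, data=body, headers={"Content-Type": "application/json"})
print(req.get_method(), req.full_url)  # providing data makes this a POST
```

Sending `req` with `urllib.request.urlopen` (or pointing any OpenAI-style client at the base URL) would then return a chat completion from the Qwen2 model the server downloaded.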

0 commit comments
