
Commit cd46721

kirklandsign authored and facebook-github-bot committed
Linter fix (#5643)
Summary: Pull Request resolved: #5643

Reviewed By: larryliu0820

Differential Revision: D63401959

Pulled By: kirklandsign

fbshipit-source-id: 724f342b75bc1de7b5347315ac9fc000b169ad17
1 parent 6e9efa1 commit cd46721

File tree

1 file changed: +1 −1 lines changed


examples/demo-apps/android/LlamaDemo/docs/delegates/xnnpack_README.md

Lines changed: 1 addition & 1 deletion
@@ -78,7 +78,7 @@ For more detail using Llama 3.2 lightweight models including prompt template, pl
 ### For Llama Guard 1B models
 To safeguard your application, you can use our Llama Guard models for prompt classification or response classification as mentioned [here](https://www.llama.com/docs/model-cards-and-prompt-formats/llama-guard-3/).
 * Llama Guard 3-1B is a fine-tuned Llama-3.2-1B pretrained model for content safety classification. It is aligned to safeguard against the [MLCommons standardized hazards taxonomy](https://arxiv.org/abs/2404.12241).
-* You can download the latest Llama Guard 1B INT4 model, which is already exported for Executorch, using instructions from [here](https://github.com/meta-llama/PurpleLlama/tree/main/Llama-Guard3). This model is pruned and quantized to 4-bit weights using 8da4w mode and reduced the size to <450MB to optimize deployment on edge devices.
+* You can download the latest Llama Guard 1B INT4 model, which is already exported for ExecuTorch, using instructions from [here](https://github.com/meta-llama/PurpleLlama/tree/main/Llama-Guard3). This model is pruned and quantized to 4-bit weights using 8da4w mode and reduced the size to <450MB to optimize deployment on edge devices.
 * You can use the same tokenizer from Llama 3.2.
 * To try this model, choose Model Type as LLAMA_GUARD_3 in the demo app below and try prompt classification for a given user prompt.
 * We prepared this model using the following command
