Commit 6186313

larryliu0820 authored and facebook-github-bot committed

Fix llama2 README.md cmake instructions (#3096)

Summary: As titled. The current instructions run into issues due to the way `pthreadpool` and `cpuinfo` are arranged in CMake. Cleaning that up will take a bigger effort; for now, update the instructions so they can be run as written. Differential Revision: D56251563

1 parent bae0387 commit 6186313

File tree

1 file changed: +3 −0 lines changed


examples/models/llama2/README.md

Lines changed: 3 additions & 0 deletions
```diff
@@ -146,6 +146,7 @@ The Uncyclotext results generated above used: `{max_seq_len: 2048, limit: 1000}`
     -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
     -DEXECUTORCH_BUILD_XNNPACK=ON \
     -DEXECUTORCH_BUILD_OPTIMIZED=ON \
+    -DEXECUTORCH_BUILD_CUSTOM=ON \
     -Bcmake-out .

 cmake --build cmake-out -j16 --target install --config Release
@@ -156,7 +157,9 @@ The Uncyclotext results generated above used: `{max_seq_len: 2048, limit: 1000}`
 cmake -DPYTHON_EXECUTABLE=python \
     -DCMAKE_INSTALL_PREFIX=cmake-out \
     -DCMAKE_BUILD_TYPE=Release \
+    -DEXECUTORCH_BUILD_CUSTOM=ON \
     -DEXECUTORCH_BUILD_OPTIMIZED=ON \
+    -DEXECUTORCH_BUILD_XNNPACK=ON \
     -Bcmake-out/examples/models/llama2 \
     examples/models/llama2
```
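With the patch applied, the second configure step from the README reads in full as below. This is a sketch assembled from the diff context only; README lines outside the two hunks, and the repo-root working directory, are assumptions.

```shell
# Configure the llama2 example (flags as shown in the patched README).
# Assumes the command is run from the ExecuTorch repository root and that
# `cmake --build cmake-out ... --target install` was run first, as in the
# preceding step of the README.
cmake -DPYTHON_EXECUTABLE=python \
    -DCMAKE_INSTALL_PREFIX=cmake-out \
    -DCMAKE_BUILD_TYPE=Release \
    -DEXECUTORCH_BUILD_CUSTOM=ON \
    -DEXECUTORCH_BUILD_OPTIMIZED=ON \
    -DEXECUTORCH_BUILD_XNNPACK=ON \
    -Bcmake-out/examples/models/llama2 \
    examples/models/llama2
```

Note the new `-DEXECUTORCH_BUILD_CUSTOM=ON` and `-DEXECUTORCH_BUILD_XNNPACK=ON` flags, which this commit adds so the configure step succeeds despite the current `pthreadpool`/`cpuinfo` arrangement.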

0 commit comments
