Fix Windows build #10946


Merged: 1 commit, May 19, 2025

Conversation

SS-JIA
Contributor

@SS-JIA SS-JIA commented May 16, 2025

Summary:

Context

Fix the third-party `CMakeLists.txt` to allow `flatcc` to build for Windows. Some CMake configuration settings need to be adjusted for Windows platforms.

Test Plan:

Test Plan

```
python install_executorch.py
```


pytorch-bot bot commented May 16, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/10946

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures

As of commit 25a7a8b with merge base 3032398:


This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label May 16, 2025
@SS-JIA SS-JIA added the release notes: build Changes related to build, including dependency upgrades, build flags, optimizations, etc. label May 16, 2025
@zingo
Collaborator

zingo commented May 16, 2025

Sorry about the failure in
`pull / unittest-arm-backend-with-no-fvp (test_pytest_models) / linux-job`.
A possible fix is ongoing here:
#10950
Please ignore this error; it is unrelated to your PR.

```diff
@@ -113,7 +113,7 @@ if(NOT CMAKE_GENERATOR STREQUAL "Xcode"
     quantized_pybind_kernels_lib DEPS portable_lib
   )
   target_link_libraries(
-    quantized_ops_aot_lib PUBLIC quantized_ops_pybind_lib
+    quantized_ops_aot_lib PUBLIC quantized_ops_pybind_lib quantized_ops_lib
```
Contributor Author


Needed to prevent linker errors when linking the AOT lib.

Contributor

@jathu jathu left a comment


LGTM, but let's make sure CI is happy.

Comment on lines 118 to 120:

```cmake
if (_WIN32)
  target_link_libraries(quantized_ops_aot_lib PUBLIC quantized_ops_lib)
endif()
```
Contributor


This doesn't look right. Why is the dependency different for Windows?

Contributor Author


I don't actually know... if I don't do this, then on Windows there are linker errors where the AOT lib cannot find the symbols for the actual operator implementations in `quantized_ops_lib`.

However, if I do this across the board, then I run into duplicate registration issues on non-Windows platforms. It could be platform-dependent linker behaviour, I suppose?
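For reference, a minimal sketch of the conditional link being discussed, assuming the standard CMake `WIN32` variable and the target names quoted above (the PR's exact change may differ):

```cmake
# Sketch only: link the kernel implementation library into the AOT lib on
# Windows, where the linker otherwise fails to resolve the operator symbols.
# On other platforms the symbols resolve transitively, and adding the direct
# link causes duplicate operator registration.
if(WIN32)
  target_link_libraries(quantized_ops_aot_lib PUBLIC quantized_ops_lib)
endif()
```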

Contributor Author

@SS-JIA SS-JIA May 19, 2025


I also tested running a model on Windows to make sure that the duplicate registration issue did not occur there.

@SS-JIA
Contributor Author

SS-JIA commented May 19, 2025

The failures don't seem related. In both cases the build succeeds (which is what this PR would impact), but the failures occur during specific tests.

Build Linux Wheels was cancelled because:

"Canceling since a higher priority waiting request for Build Linux Wheels-10946-wheel-linux-pytorch/test-infra-main exists"

The test-arm-backend failure is because CPU time was exceeded:

```
--------------------------------------------------------------------------------
Running /pytorch/executorch/arm_test/test_model/mv2_arm_delegate_ethos-u85-256/mv2_arm_delegate_ethos-u85-256/cmake-out/arm_executor_runner for ethos-u85-256 run with FVP:FVP_Corstone_SSE-320 num_macs:256 timeout:600
WARNING: Corstone FVP is not cycle accurate and should NOT be used to determine valid runtime
--------------------------------------------------------------------------------
telnetterminal0: Listening for serial connection on port 5000
telnetterminal1: Listening for serial connection on port 5001
telnetterminal2: Listening for serial connection on port 5002
telnetterminal5: Listening for serial connection on port 5003
I [executorch:arm_executor_runner.cpp:491] PTE in 0x78000000 , Size: 3541184 bytes
I [executorch:arm_executor_runner.cpp:516] PTE Model data loaded. Size: 2934768 bytes.
I [executorch:arm_executor_runner.cpp:529] Model buffer loaded, has 1 methods
I [executorch:arm_executor_runner.cpp:537] Running method forward
I [executorch:arm_executor_runner.cpp:548] Setup Method allocator pool. Size: 62914560 bytes.
Traceback (most recent call last):
  File "/home/ec2-user/actions-runner/_work/executorch/executorch/test-infra/.github/scripts/run_with_env_secrets.py", line 102, in <module>
I [executorch:arm_executor_runner.cpp:565] Setting up planned buffer 0, size 752640.
I [executorch:arm_executor_runner.cpp:595] Setting up ETDump
I [executorch:EthosUBackend.cpp:114] EthosUBackend::init 0x780000b0
I [executorch:arm_executor_runner.cpp:612] Method 'forward' loaded.
I [executorch:arm_executor_runner.cpp:614] Preparing inputs...
I [executorch:arm_executor_runner.cpp:621] Input testset[0] from bundled bpte
I [executorch:arm_executor_runner.cpp:682] Input prepared.
I [executorch:arm_executor_runner.cpp:684] Starting the model execution...

Info: Simulation is stopping. Reason: CPU time has been exceeded.
```

The test-llama-runner-mac failure is because the LLM output doesn't look right:

```
PyTorchObserver {"prompt_tokens":1,"generated_tokens":8,"model_load_start_ms":1747689319901,"model_load_end_ms":1747689324733,"inference_start_ms":1747689324843,"inference_end_ms":1747689324936,"prompt_eval_end_ms":1747689324854,"first_token_ms":1747689324854,"aggregate_sampling_time_ms":0,"SCALING_FACTOR_UNITS_PER_SECOND":1000}'
+ EXPECTED_PREFIX='Once upon a time,'
+ [[ Once upon the end, she felt really happy and
PyTorchObserver {"prompt_tokens":1,"generated_tokens":8,"model_load_start_ms":1747689319901,"model_load_end_ms":1747689324733,"inference_start_ms":1747689324843,"inference_end_ms":1747689324936,"prompt_eval_end_ms":1747689324854,"first_token_ms":1747689324854,"aggregate_sampling_time_ms":0,"SCALING_FACTOR_UNITS_PER_SECOND":1000} == \O\n\c\e\ \u\p\o\n\ \a\ \t\i\m\e\,* ]]
+ echo 'Expected result prefix: Once upon a time,'
Expected result prefix: Once upon a time,
+ echo 'Actual result: Once upon the end, she felt really happy and
PyTorchObserver {"prompt_tokens":1,"generated_tokens":8,"model_load_start_ms":1747689319901,"model_load_end_ms":1747689324733,"inference_start_ms":1747689324843,"inference_end_ms":1747689324936,"prompt_eval_end_ms":1747689324854,"first_token_ms":1747689324854,"aggregate_sampling_time_ms":0,"SCALING_FACTOR_UNITS_PER_SECOND":1000}'
Actual result: Once upon the end, she felt really happy and
PyTorchObserver {"prompt_tokens":1,"generated_tokens":8,"model_load_start_ms":1747689319901,"model_load_end_ms":1747689324733,"inference_start_ms":1747689324843,"inference_end_ms":1747689324936,"prompt_eval_end_ms":1747689324854,"first_token_ms":1747689324854,"aggregate_sampling_time_ms":0,"SCALING_FACTOR_UNITS_PER_SECOND":1000}
+ echo 'Failure; results not the same'
Failure; results not the same
```
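The failing check in the log is a bash prefix comparison against the expected output. A minimal sketch of that pattern, with the values taken from the log (the script itself is illustrative, not the actual CI script):

```shell
#!/usr/bin/env bash
# Illustrative re-creation of the CI prefix check; not the actual CI script.
EXPECTED_PREFIX='Once upon a time,'
RESULT='Once upon the end, she felt really happy and'

# [[ str == pattern* ]] tests whether RESULT starts with EXPECTED_PREFIX.
if [[ "${RESULT}" == "${EXPECTED_PREFIX}"* ]]; then
  echo "Success; prefix matches"
else
  echo "Failure; results not the same"
fi
```

Because the generated text starts with "Once upon the end," rather than the expected prefix, the check takes the failure branch, matching the log above.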

@SS-JIA SS-JIA merged commit cb3eba0 into main May 19, 2025
209 of 295 checks passed
@SS-JIA SS-JIA deleted the pr10946 branch May 19, 2025 21:57
@zingo
Collaborator

zingo commented May 19, 2025

I just merged a PR that I expect/hope fixes this. Feel free to ignore the Arm backend failures; if you rebase, I expect them to go away.

This:
#10973

Labels
ciflow/binaries ciflow/trunk CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. release notes: build Changes related to build, including dependency upgrades, build flags, optimizations, etc.