Integrate Corstone-300 in unit tests #4015
Conversation
The application now handles command-line input passed through semihosting when compiled with SEMIHOSTING defined. The command line is:

executor_runner -m m.pte -i input1.bin -o output

where -i can occur multiple times and output is the basename, which will be suffixed with the output tensor number and '.bin'.

Signed-off-by: Per Åstrand <[email protected]>
Change-Id: I718029f6147fbf39e884d3ae487a0855c5439259
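As a rough illustration of the command-line semantics described above, here is a Python sketch (the real parsing happens in the C++ executor_runner; the exact output-name separator is an assumption):

```python
# Hypothetical model of the executor_runner CLI described above.
# -i is repeatable; -o is a basename suffixed with the tensor number
# and '.bin' (the "-" separator here is an assumption).
import argparse


def parse_runner_args(argv):
    parser = argparse.ArgumentParser(prog="executor_runner")
    parser.add_argument("-m", required=True, help="path to the .pte model")
    parser.add_argument("-i", action="append", default=[],
                        help="input .bin file (may occur multiple times)")
    parser.add_argument("-o", required=True, help="output basename")
    return parser.parse_args(argv)


def output_filename(basename, tensor_index):
    # One output file per output tensor.
    return f"{basename}-{tensor_index}.bin"
```

For example, `executor_runner -m m.pte -i a.bin -i b.bin -o out` would write `out-0.bin`, `out-1.bin`, and so on, one file per output tensor.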
The output is program.pte in the folder set in the compile spec by dump_intermediate_artifacts_to(). I renamed 'debug_tosa_path' to 'debug_artifact_path' to reflect that many different intermediate artifacts can be dumped to the same path. I debated whether to do the dump as a test stage or in backend.preprocess(), since the TOSA is currently dumped in postprocess. In the end I went with a stage, since ExecuTorch already had a Serialize stage and since there is a comment mentioning that dumping the TOSA in preprocess is not ideal.

Change-Id: I52f1926635013093ed6282c53e3bbc15be5e40b1
Signed-off-by: Erik Lundell <[email protected]>
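A minimal sketch of the dump-as-a-stage idea, assuming a simplified interface (the class and field names here are illustrative, not the actual ExecuTorch/Arm test-infrastructure API):

```python
# Illustrative Serialize-style stage that dumps the final program.pte
# into the shared intermediate-artifact directory. The names
# SerializeStage and debug_artifact_path are assumptions mirroring
# the renamed compile-spec field described above.
import os


class SerializeStage:
    def __init__(self, debug_artifact_path=None):
        # Many intermediate artifacts (TOSA, .pte, ...) can share
        # this dump directory.
        self.debug_artifact_path = debug_artifact_path

    def run(self, pte_bytes):
        if self.debug_artifact_path is not None:
            os.makedirs(self.debug_artifact_path, exist_ok=True)
            path = os.path.join(self.debug_artifact_path, "program.pte")
            with open(path, "wb") as f:
                f.write(pte_bytes)
        return pte_bytes
```

Keeping the dump in a dedicated stage rather than in preprocess keeps backend.preprocess() free of test-only side effects.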
The main idea is to view the run_and... function as always comparing an output stage to a reference stage. In practice, the reference is currently the PyTorch model or the quantized model, and the output stage is TOSA or .pte. However, as long as run_artifact is implemented and produces consistent output, any stage should be comparable to any other. To do this, I had to introduce a stage for the initial model. I also had to override some Stage classes. This had the added benefit of moving run logic into ToExecutorch's run_artifact, which was mentioned in an earlier comment. I made a small change in the Quantize stage base class; if we don't want that, it could also be overridden.

Change-Id: Ie64f8237bfd10abbb724f04fe006758dfff0044f
Signed-off-by: Erik Lundell <[email protected]>
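The "compare any stage to any other stage" idea can be sketched as follows; every name here is hypothetical, and the only contract assumed is that each stage exposes run_artifact(inputs):

```python
# Sketch of the stage-comparison pattern described above. A comparison
# helper only depends on the run_artifact interface, so any two stages
# (initial model, quantized model, TOSA, .pte) are comparable.


class InitialModelStage:
    """Hypothetical stage wrapping the initial (reference) model."""

    def __init__(self, fn):
        self.fn = fn

    def run_artifact(self, inputs):
        # Return outputs as a flat list of scalars for simplicity.
        return [self.fn(*inputs)]


def run_and_compare(output_stage, reference_stage, inputs, tol=1e-6):
    # Run both stages on the same inputs and compare elementwise.
    out = output_stage.run_artifact(inputs)
    ref = reference_stage.run_artifact(inputs)
    return all(abs(a - b) <= tol for a, b in zip(out, ref))
```

The benefit of this shape is that adding a new backend or artifact type only requires implementing run_artifact, not touching the comparison logic.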
- setup_testing.sh script to build an executor_runner in semihosting mode in /test/res
- Refactor tosa_test_utils to runner_utils that can handle both running tosa_reference and Corstone
- Use refactored runner_utils in arm_tester
- Add pytest hooks to common.py for enabling/disabling Corstone testing
- Enable Corstone testing in the GitHub trunk job
- Squash some bugs in executor_runner
- Move Corstone heap back to DTCM

Usage is shown in test_add.

Change-Id: Id560a6bc90987542ea777066d50cbd5bd4065028
Signed-off-by: Erik Lundell <[email protected]>
Signed-off-by: Erik Lundell <[email protected]>
Change-Id: I7859514463feae3d22cf283afa9517e60ca1365d
ET_LOG(Fatal, "Not right number of parameters!");
ET_LOG(
    Fatal,
    "app -m model.pte -i input.bin [-i input2.bin] -o output_basename");
Nice!
backends/arm/test/ops/test_conv.py (Outdated)
@@ -296,7 +299,6 @@ def _test_conv2d_u55_BI_pipeline(
    .check_count({"torch.ops.higher_order.executorch_call_delegate": 1})
    .check_not(["executorch_exir_dialects_edge__ops_aten_convolution_default"])
    .to_executorch()
    .serialize()
I guess this doesn't add much value?
This stage produces the final .pte which I think is valuable. It also follows the pattern where there is a stage for each artifact + a run_stage method. For the serialize stage, the run_stage launches the FVP
I meant that you are dropping the serialize call here, where you are not actually running it.
Ah right, yeah, it's a "make sure it doesn't crash" test for now. Actually running it is the next step!
"-C",
"cpu0.CFGITCMSZ=11",
"-C",
"cpu0.semihosting-enable=1",
Why do we want to do this as opposed to loading on a "real" TCM or something? Curious from test coverage point of view. I understand if there are issues with address space etc.
Are you referring to the increased ITCM size? This was somewhat of a quick fix to fit the runner in the FVP in a simple way. But it would be nice to match the Himax or similar in the future.
I am referring to using SemiHosting for testing
Ok, the reason for that is to avoid building a new runner for every test.
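To illustrate how a single runner can be reused across tests, here is a sketch of how a runner_utils-style helper might assemble the FVP invocation; the `-C` options come from the diff above, while the FVP binary name, the `-a` flag, and the function name are assumptions:

```python
# Hypothetical helper assembling the Corstone-300 FVP command line with
# semihosting enabled, so one prebuilt executor_runner can serve every
# test. Only the -C options are taken from the PR diff; the rest is
# illustrative.


def build_fvp_cmd(elf_path, extra_args=()):
    cmd = [
        "FVP_Corstone_SSE-300_Ethos-U55",   # assumed FVP binary name
        "-C", "cpu0.CFGITCMSZ=11",          # enlarge ITCM to fit the runner
        "-C", "cpu0.semihosting-enable=1",  # pass model/inputs via semihosting
        "-a", elf_path,                     # application image to load
    ]
    cmd.extend(extra_args)
    return cmd
```

With semihosting, the model and input files are passed to the running application at test time, so the ELF itself never changes between tests.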
@digantdesai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
@digantdesai merged this pull request in 649d7b1.
setup_testing.sh needs to be run to build an executor_runner for tests.
To enable this, pytest needs to be run with the following flags:
-p executorch.backends.arm.test.common
--arm_quantize_io
--arm_run_corstone300
This is not yet enabled in CI.
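The pytest plumbing behind those flags might look like the following sketch; the option names match the flags listed above, but the hook bodies are illustrative, not the actual common.py implementation:

```python
# Illustrative pytest plugin hooks for the flags listed above.
# pytest_addoption is a real pytest hook name; the helper function and
# how tests consume it are assumptions.


def pytest_addoption(parser):
    # Registered when pytest is run with -p executorch.backends.arm.test.common
    parser.addoption("--arm_quantize_io", action="store_true")
    parser.addoption("--arm_run_corstone300", action="store_true")


def corstone300_enabled(config):
    # Tests call this to decide whether to launch the Corstone-300 FVP
    # or skip the FVP portion of the test.
    return config.getoption("--arm_run_corstone300")
```

Gating the FVP runs behind an opt-in flag keeps the default test run fast for developers without the FVP installed.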