Move portable_programming.md into the new docs tree #1080

Status: Closed (wants to merge 1 commit)
3 changes: 1 addition & 2 deletions CONTRIBUTING.md
@@ -54,8 +54,7 @@ modifications to the Google C++ style guide.

### C++ Portability Guidelines

See also
[Portable Programming](https://github.com/pytorch/executorch/blob/main/docs/website/docs/contributors/portable_programming.md)
See also [Portable C++ Programming](/docs/source/portable-cpp-programming.md)
for detailed advice.

#### C++ language version
1 change: 1 addition & 0 deletions docs/source/index.rst
@@ -147,6 +147,7 @@ Topics in this section will help you get started with ExecuTorch.
runtime-overview
runtime-backend-delegate-implementation-and-linking
runtime-platform-abstraction-layer
portable-cpp-programming

.. toctree::
:glob:
@@ -1,8 +1,10 @@
# Portable Programming
# Portable C++ Programming

NOTE: This document covers the runtime code: i.e., the code that needs to build
for and execute in target hardware environments. These rules do not necessarily
apply to code that only runs on the development host, like authoring tools.
NOTE: This document covers the code that needs to build for and execute in
target hardware environments. This applies to the core execution runtime, as
well as kernel and backend implementations in this repo. These rules do not
necessarily apply to code that only runs on the development host, like authoring
or build tools.

The ExecuTorch runtime code is intended to be portable, and should build for a
wide variety of systems, from servers to mobile phones to DSPs, from POSIX to
@@ -26,12 +28,14 @@ allocation, the code may not use:
- `malloc()`, `free()`
- `new`, `delete`
- Most `stdlibc++` types; especially container types that manage their own
memory, like `string` and `vector`.
memory like `string` and `vector`, or memory-management wrapper types like
`unique_ptr` and `shared_ptr`.

And to help reduce complexity, the code may not depend on any external
dependencies except:
- `flatbuffers`
- `caffe2/...` (only for ATen mode)
- `flatbuffers` (for `.pte` file deserialization)
- `flatcc` (for event trace serialization)
- Core PyTorch (only for ATen mode)

## Platform Abstraction Layer (PAL)

@@ -46,13 +50,13 @@ like:
## Memory Allocation

Instead of using `malloc()` or `new`, the runtime code should allocate memory
using the `MemoryManager` (`//executorch/runtime/executor/MemoryManager.h`) provided by
the client.
using the `MemoryManager` (`//executorch/runtime/executor/memory_manager.h`)
provided by the client.

## File Loading

Instead of loading program files directly, clients should provide buffers with
the data already loaded.
Instead of loading files directly, clients should provide buffers with the data
already loaded, or wrapped in types like `DataLoader`.

## Integer Types

@@ -145,8 +149,8 @@ value to the lean mode type, like:
ET_CHECK_MSG(
input.dim() == output.dim(),
"input.dim() %zd not equal to output.dim() %zd",
ssize_t(input.dim()),
ssize_t(output.dim()));
(ssize_t)input.dim(),
(ssize_t)output.dim());
```
In this case, `Tensor::dim()` returns `ssize_t` in lean mode, while
`at::Tensor::dim()` returns `int64_t` in ATen mode. Since they both conceptually