Commit e45fd61

dbort authored and facebook-github-bot committed
Move portable_programming.md into the new docs tree (#1080)
Summary: Move portable_programming.md into the new docs tree. Also update the
one reference to it.

Reviewed By: JacobSzwejbka

Differential Revision: D50609267
1 parent dab0d71 commit e45fd61

File tree

3 files changed: +19 −15 lines

CONTRIBUTING.md

Lines changed: 1 addition & 2 deletions

```diff
@@ -54,8 +54,7 @@ modifications to the Google C++ style guide.
 
 ### C++ Portability Guidelines
 
-See also
-[Portable Programming](https://github.com/pytorch/executorch/blob/main/docs/website/docs/contributors/portable_programming.md)
+See also [Portable C++ Programming](/docs/source/portable-cpp-programming.md)
 for detailed advice.
 
 #### C++ language version
```

docs/source/index.rst

Lines changed: 1 addition & 0 deletions

```diff
@@ -147,6 +147,7 @@ Topics in this section will help you get started with ExecuTorch.
    runtime-overview
    runtime-backend-delegate-implementation-and-linking
    runtime-platform-abstraction-layer
+   portable-cpp-programming
 
 .. toctree::
    :glob:
```

docs/website/docs/contributors/portable_programming.md renamed to docs/source/portable-cpp-programming.md

Lines changed: 17 additions & 13 deletions

```diff
@@ -1,8 +1,10 @@
-# Portable Programming
+# Portable C++ Programming
 
-NOTE: This document covers the runtime code: i.e., the code that needs to build
-for and execute in target hardware environments. These rules do not necessarily
-apply to code that only runs on the development host, like authoring tools.
+NOTE: This document covers the code that needs to build for and execute in
+target hardware environments. This applies to the core execution runtime, as
+well as kernel and backend implementations in this repo. These rules do not
+necessarily apply to code that only runs on the development host, like authoring
+or build tools.
 
 The ExecuTorch runtime code is intendend to be portable, and should build for a
 wide variety of systems, from servers to mobile phones to DSPs, from POSIX to
```
```diff
@@ -26,12 +28,14 @@ allocation, the code may not use:
 - `malloc()`, `free()`
 - `new`, `delete`
 - Most `stdlibc++` types; especially container types that manage their own
-  memory, like `string` and `vector`.
+  memory like `string` and `vector`, or memory-management wrapper types like
+  `unique_ptr` and `shared_ptr`.
 
 And to help reduce complexity, the code may not depend on any external
 dependencies except:
-- `flatbuffers`
-- `caffe2/...` (only for ATen mode)
+- `flatbuffers` (for `.pte` file deserialization)
+- `flatcc` (for event trace serialization)
+- Core PyTorch (only for ATen mode)
 
 ## Platform Abstraction Layer (PAL)
 
```
```diff
@@ -46,13 +50,13 @@ like:
 ## Memory Allocation
 
 Instead of using `malloc()` or `new`, the runtime code should allocate memory
-using the `MemoryManager` (`//executorch/runtime/executor/MemoryManager.h`) provided by
-the client.
+using the `MemoryManager` (`//executorch/runtime/executor/memory_manager.h`)
+provided by the client.
 
 ## File Loading
 
-Instead of loading program files directly, clients should provide buffers with
-the data already loaded.
+Instead of loading files directly, clients should provide buffers with the data
+already loaded, or wrapped in types like `DataLoader`.
 
 ## Integer Types
 
```
````diff
@@ -145,8 +149,8 @@ value to the lean mode type, like:
 ET_CHECK_MSG(
     input.dim() == output.dim(),
     "input.dim() %zd not equal to output.dim() %zd",
-    ssize_t(input.dim()),
-    ssize_t(output.dim()));
+    (ssize_t)input.dim(),
+    (ssize_t)output.dim());
 ```
 In this case, `Tensor::dim()` returns `ssize_t` in lean mode, while
 `at::Tensor::dim()` returns `int64_t` in ATen mode. Since they both conceptually
````

0 commit comments

Comments
 (0)