# Portable C++ Programming

NOTE: This document covers the code that needs to build for and execute in
target hardware environments. This applies to the core execution runtime, as
well as kernel and backend implementations in this repo. These rules do not
necessarily apply to code that only runs on the development host, like authoring
or build tools.

The ExecuTorch runtime code is intended to be portable, and should build for a
wide variety of systems, from servers to mobile phones to DSPs, from POSIX to
…allocation, the code may not use:

- `malloc()`, `free()`
- `new`, `delete`
- Most `stdlibc++` types; especially container types that manage their own
  memory like `string` and `vector`, or memory-management wrapper types like
  `unique_ptr` and `shared_ptr`.
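The rules above rule out heap-managing containers, so runtime code typically substitutes fixed-capacity containers backed by storage the caller controls. Below is a minimal sketch of that pattern; the `FixedVector` name is illustrative only, not an actual ExecuTorch type:

```cpp
#include <array>
#include <cstddef>

// Illustrative fixed-capacity list that never calls malloc()/new.
// Capacity is a compile-time constant; exhaustion is reported to the
// caller instead of triggering a reallocation.
template <typename T, std::size_t kCapacity>
class FixedVector {
 public:
  // Returns false (rather than growing) when the capacity is reached.
  bool push_back(const T& value) {
    if (size_ >= kCapacity) {
      return false;
    }
    data_[size_++] = value;
    return true;
  }

  std::size_t size() const { return size_; }
  const T& operator[](std::size_t i) const { return data_[i]; }

 private:
  std::array<T, kCapacity> data_{};
  std::size_t size_ = 0;
};
```

Because the storage is inline, the caller decides where it lives (stack, static storage, or a client-provided pool), which is exactly the control constrained targets need.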
And to help reduce complexity, the code may not depend on any external
dependencies except:

- `flatbuffers` (for `.pte` file deserialization)
- `flatcc` (for event trace serialization)
- Core PyTorch (only for ATen mode)

## Platform Abstraction Layer (PAL)
## Memory Allocation

Instead of using `malloc()` or `new`, the runtime code should allocate memory
using the `MemoryManager` (`//executorch/runtime/executor/memory_manager.h`)
provided by the client.
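To illustrate the general shape of client-provided allocation, here is a minimal bump-allocator sketch. The names are hypothetical; the actual interface is defined in `memory_manager.h`:

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative bump allocator: hands out slices of a buffer the client
// owns, and reports exhaustion instead of falling back to the heap.
class BumpAllocator {
 public:
  BumpAllocator(uint8_t* base, std::size_t size)
      : base_(base), size_(size) {}

  // Returns nullptr when the buffer is exhausted; never calls malloc().
  void* allocate(std::size_t nbytes) {
    if (nbytes > size_ - offset_) {
      return nullptr;
    }
    void* result = base_ + offset_;
    offset_ += nbytes;
    return result;
  }

 private:
  uint8_t* base_;
  std::size_t size_;
  std::size_t offset_ = 0;
};
```

Because the client owns the backing buffer, it can place it in static storage, on the stack, or in a dedicated pool, and the runtime itself never needs a heap.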
## File Loading

Instead of loading files directly, clients should provide buffers with the data
already loaded, or wrapped in types like `DataLoader`.
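In its simplest form, the handoff is a pointer/length pair that the runtime only reads from. A minimal sketch of the pattern follows; the `ProgramView` type and `starts_with_magic` helper are hypothetical, not the actual `DataLoader` API:

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative view of data the client has already loaded; the runtime
// never opens files or touches the filesystem itself.
struct ProgramView {
  const uint8_t* data;
  std::size_t size;
};

// Example runtime-side consumer: checks a little-endian magic number.
// Byte-wise assembly avoids unaligned loads on strict targets.
bool starts_with_magic(const ProgramView& view, uint32_t magic) {
  if (view.size < sizeof(uint32_t)) {
    return false;
  }
  uint32_t value = 0;
  for (std::size_t i = 0; i < sizeof(uint32_t); ++i) {
    value |= static_cast<uint32_t>(view.data[i]) << (8 * i);
  }
  return value == magic;
}
```

How the bytes got there (mmap, flash read, network transfer) is the client's concern, which keeps filesystem assumptions out of portable runtime code.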
## Integer Types
…value to the lean mode type, like:

```
ET_CHECK_MSG(
    input.dim() == output.dim(),
    "input.dim() %zd not equal to output.dim() %zd",
    (ssize_t)input.dim(),
    (ssize_t)output.dim());
```

In this case, `Tensor::dim()` returns `ssize_t` in lean mode, while
`at::Tensor::dim()` returns `int64_t` in ATen mode. Since they both conceptually