create et_view primop #2553
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/2553
Note: Links to docs will display an error until the docs builds have been completed. ⏳ No Failures, 1 Pending as of commit f41bb58 with merge base d612c23. This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D55099757
Summary: Pull Request resolved: pytorch#2553. Implements a new view prim op kernel.
Reviewed By: larryliu0820
Differential Revision: D55099757
756fa77 to 22211ca (Compare)
22211ca to 307d914 (Compare)
307d914 to 9606054 (Compare)
9606054 to 89fad44 (Compare)
89fad44 to 1270f31 (Compare)
1270f31 to 70dceb3 (Compare)
70dceb3 to e4cba39 (Compare)
e4cba39 to c5d2c40 (Compare)
c5d2c40 to 7c53597 (Compare)
Summary: Design: https://docs.google.com/document/d/1l9x925EOrE8mHFJdRCC59nBJXyqBdnoeK-EgNQScXD0/edit#heading=h.kocb2mvchnib

When remove_static_view_copy is turned off (the state today), the pass flow in to_executorch is:
1. config.to_out_var_pass
2. config.memory_planning_pass

When remove_static_view_copy is turned on, the pass flow in to_executorch becomes:
1. NormalizeViewCopyBasePass()
2. ReplaceStaticViewCopyWithMemoryViewPass() (introduces executorch.exir.memory.view)
3. config.to_out_var_pass (skips executorch.exir.memory.view)
4. config.memory_planning_pass
5. ReplaceMemoryViewWithAllocPass() (removes executorch.exir.memory.view)

The basic idea is to replace view_copy with a new operator, executorch.exir.memory.view, before memory planning (ReplaceStaticViewCopyWithMemoryViewPass). These nodes share the same spec as their base so that lifetimes are updated appropriately during memory planning. After memory planning, these nodes are converted to executorch.exir.memory.alloc nodes before emission. They are not converted to alloc nodes before memory planning: before memory planning, memory.view nodes share the same spec as their base, but after memory planning they get new specs when they are converted to memory.alloc (pointing to the same storage as the base).

Differential Revision: https://internalfb.com/D54816555
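As a rough illustration of the two pass orders described above, here is a minimal Python sketch. The helper name and the use of strings in place of the real pass objects are assumptions made for readability; this is not the actual to_executorch implementation.

```python
# Minimal sketch (not the real API): the two pass orders described in the
# summary above, with strings standing in for the actual pass objects.
def passes_for_to_executorch(remove_static_view_copy: bool) -> list[str]:
    if remove_static_view_copy:
        return [
            "NormalizeViewCopyBasePass()",                # 1. normalize view_copy bases
            "ReplaceStaticViewCopyWithMemoryViewPass()",  # 2. introduces exir.memory.view
            "config.to_out_var_pass",                     # 3. skips exir.memory.view nodes
            "config.memory_planning_pass",                # 4. lifetimes tracked via base specs
            "ReplaceMemoryViewWithAllocPass()",           # 5. removes exir.memory.view
        ]
    # State today: remove_static_view_copy turned off.
    return ["config.to_out_var_pass", "config.memory_planning_pass"]
```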
Summary: Design: https://docs.google.com/document/d/1l9x925EOrE8mHFJdRCC59nBJXyqBdnoeK-EgNQScXD0/edit#heading=h.kocb2mvchnib

This stack replaces view_copy nodes with memory.view nodes.

In the first diff (D54816555), I write a pass to normalize view_copy nodes by making their base point to the upstream non-view node. This means that if we have something like op -> view_copy1 -> view_copy2, then after normalization both view copies will point to op in their base (assuming op is not a view node). Note that this pass, combined with dead-code elimination, removes redundant view copies, because a redundant view copy will have no users after this pass.

In the second diff (D54827438), I write a pass to convert view_copy nodes to memory.view nodes. A memory.view is similar to torch.ops.aten.view.default, but it is its own function so that we can handle it specially during memory planning and emission. A memory.view node has a special TensorSpec of type _MemoryViewSpec. This spec is immutable and dynamically looks up non-size-related fields from its base's TensorSpec. Because it is immutable, fields on a _MemoryViewSpec cannot be set, but if a field is updated on the base spec, the update is reflected in the memory.view node's _MemoryViewSpec.

Not all view_copy nodes are converted to memory.view nodes; only static nodes that are memory planned are converted. Not all static nodes are memory planned in ExecuTorch. For example, there is an option to turn off memory planning for input nodes, and outputs from some higher-order ops like cond are not memory planned. Which nodes are memory planned is not easily available, and I did not try to cover all cases of nodes that can be converted. We can expand this list over time.

In the third diff (D54827438), I implement the actual view_copy elimination. In the ExecutorchBackendConfig, there is a new option remove_static_view_copy. If remove_static_view_copy = True, the memory planning passes are [NormalizeViewCopyBasePass(), ReplaceViewCopyWithMemoryViewPass(), config.to_out_var_pass, config.memory_planning_pass]; if remove_static_view_copy = False, the memory planning passes are [config.to_out_var_pass, config.memory_planning_pass] (the state today).

Let's look at the flow when remove_static_view_copy = True: NormalizeViewCopyBasePass(), ReplaceViewCopyWithMemoryViewPass(), config.to_out_var_pass, config.memory_planning_pass. The first two steps are just the first and second diffs described above. In config.to_out_var_pass, the memory.view nodes are skipped. In config.memory_planning_pass, when a spec is requested for a memory.view node (e.g., to update its lifetime), we return the spec of its base. Returning the spec of the base means that whenever we see a memory.view node, we actually update the lifetime of the base to cover it. Moreover, the memory.view node's special _MemoryViewSpec sees this update reflected. (Note that an exception would be thrown if we kept the usual flow and returned the spec of the memory.view node itself, because the special _MemoryViewSpec is immutable and would not allow the memory_planning_pass to update its lifetime.) Finally, during emission the memory.view is emitted as an evalue.

There are two more diffs on the stack, D54866523 and D54866539. The first of these replaces the old RemoveRedundantViewCopy pass with NormalizeViewCopyBasePass + dead code elimination. The second converts view-like ops (squeeze, unsqueeze, slice) to view ops when it is safe to do so, to take advantage of the view_copy elimination.

Differential Revision: https://internalfb.com/D54827305
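The _MemoryViewSpec behavior described above (an immutable spec that forwards non-size fields to its base, so updates made to the base during memory planning are visible through the view) can be sketched roughly as follows. This is an illustrative toy: the class and field names are assumptions, not the actual exir implementation.

```python
# Illustrative sketch (not the real exir classes) of an immutable view spec
# that dynamically forwards non-size fields to its base spec.
class TensorSpec:
    def __init__(self, shape, dtype):
        self.shape = shape
        self.dtype = dtype
        self.lifetime = None     # filled in by memory planning
        self.mem_id = None
        self.mem_offset = None

class _MemoryViewSpec:
    def __init__(self, base, shape):
        # Bypass the immutability guard for the two fields the view owns.
        object.__setattr__(self, "base", base)
        object.__setattr__(self, "shape", shape)  # size-related field belongs to the view

    def __getattr__(self, name):
        # Only called when normal lookup fails, i.e. for every field other
        # than `base` and `shape`: read it from the base spec instead.
        return getattr(object.__getattribute__(self, "base"), name)

    def __setattr__(self, name, value):
        raise AttributeError("_MemoryViewSpec is immutable; update the base spec instead")

base = TensorSpec(shape=(2, 3), dtype="float32")
view = _MemoryViewSpec(base, shape=(6,))

base.lifetime = (0, 7)           # memory planning updates the base spec...
assert view.lifetime == (0, 7)   # ...and the view spec reflects the update
```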
Summary: Pull Request resolved: pytorch#2553. Implements a new view prim op kernel.
Reviewed By: larryliu0820, cbilgin
Differential Revision: D55099757
7c53597 to f41bb58 (Compare)
This pull request has been merged in 4b0ed91.
Summary: Pull Request resolved: pytorch#2553. Implements a new view prim op kernel. bypass-github-export-checks
Reviewed By: larryliu0820, cbilgin
Differential Revision: D55099757
fbshipit-source-id: 92e44621f4d9b38ad6ecb2610cce4b765e650029
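For intuition about what a view prim op has to do, here is a small, hypothetical Python sketch of the shape handling a view kernel performs: resolving a single -1 dimension and checking that the element count is preserved. It is illustrative only and is not the actual et_view C++ kernel from this PR.

```python
# Hypothetical sketch of the shape logic behind a view op: resolve an
# optional -1 dimension and verify the element count is unchanged.
from math import prod

def resolve_view_shape(input_sizes: list[int], requested: list[int]) -> list[int]:
    numel = prod(input_sizes)
    out = list(requested)
    infer_idx = [i for i, d in enumerate(out) if d == -1]
    if len(infer_idx) > 1:
        raise ValueError("only one dimension can be -1")
    if infer_idx:
        known = prod(d for d in out if d != -1)
        if known == 0 or numel % known != 0:
            raise ValueError(f"cannot infer -1 for shape {requested} from {input_sizes}")
        out[infer_idx[0]] = numel // known
    if prod(out) != numel:
        raise ValueError(f"shape {out} is invalid for input of size {numel}")
    return out

# Example: viewing a (2, 3, 4) tensor as (6, -1) resolves to (6, 4).
assert resolve_view_shape([2, 3, 4], [6, -1]) == [6, 4]
```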