
Commit 0e6b277

Olivia-liu authored and facebook-github-bot committed
fix minor typos (#786)
Summary: Pull Request resolved: #786, as titled.
Reviewed By: svekars
Differential Revision: D50133473
fbshipit-source-id: 27080d25487725fdb190b53d851cc7a5c8b4b00c

1 parent a5a428a · commit 0e6b277

File tree: 1 file changed, 2 additions and 2 deletions


docs/source/examples-end-to-end-to-lower-model-to-delegate.md

@@ -5,7 +5,7 @@ Audience: ML Engineers, who are interested in applying delegates to accelerate t
 Backend delegation is an entry point for backends to process and execute PyTorch
 programs to leverage performance and efficiency benefits of specialized
 backends and hardware, while still providing PyTorch users with an experience
-close to that of the PyTorch runtime. The backend delegate is usually either provited by
+close to that of the PyTorch runtime. The backend delegate is usually either provided by
 ExecuTorch or vendors. The way to leverage delegate in your program is via a standard entry point `to_backend`.

@@ -135,7 +135,7 @@ This function takes in a `Partitioner` which adds a tag to all the nodes that
 are meant to be lowered. It will return a `partition_tags` mapping tags to
 backend names and module compile specs. The tagged nodes will then be
 partitioned and lowered to their mapped backends using Flow 1's process.
-Available helper partitioner are documented
+Available helper partitioners are documented
 [here](./compiler-custom-compiler-passes.md). These lowered modules
 will be inserted into the top-level module and serialized.
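The second hunk describes the partitioner flow: a `Partitioner` tags the graph nodes that should be lowered and returns a `partition_tags` mapping from tag to backend name. As a rough, self-contained sketch of that idea in plain Python (this is not the ExecuTorch `Partitioner` API — `Node`, `partition`, and `ToyBackend` are all hypothetical names for illustration):

```python
# Toy sketch of the partitioner idea: walk a list of graph "nodes",
# tag the ops a hypothetical backend supports, and return a
# partition_tags mapping tags to the backend name.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    name: str
    op: str
    tag: Optional[str] = None

# Ops our hypothetical backend can lower.
BACKEND_SUPPORTED = {"add", "mul"}

def partition(nodes, backend="ToyBackend"):
    """Tag every supported node; map each tag to its backend name."""
    partition_tags = {}
    for i, node in enumerate(nodes):
        if node.op in BACKEND_SUPPORTED:
            tag = f"tag{i}"
            node.tag = tag
            partition_tags[tag] = backend
    return partition_tags

graph = [
    Node("x", "placeholder"),
    Node("a", "add"),
    Node("s", "sigmoid"),
    Node("m", "mul"),
]
tags = partition(graph)
print(tags)          # {'tag1': 'ToyBackend', 'tag3': 'ToyBackend'}
print(graph[2].tag)  # None -- sigmoid stays in the top-level module
```

In the real flow the tagged nodes would then be partitioned out and lowered to their mapped backends, while untagged nodes (like `sigmoid` here) remain in the top-level module.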
141141

0 commit comments
