XNNPack --> XNNPACK #590

Closed · wants to merge 1 commit

backends/xnnpack/operators/op_to_copy.py (1 addition, 1 deletion)

@@ -47,7 +47,7 @@ def define_node(
         to_contiguous = bool(memory_format_target == torch.contiguous_format)
         check_or_raise(
             to_channels_last or to_contiguous,
-            "Unsupported Memory Format for XNNPack",
+            "Unsupported Memory Format for XNNPACK",
         )
 
         input_node = get_input_node(node, 0)

backends/xnnpack/passes/__init__.py (1 addition, 1 deletion)

@@ -32,7 +32,7 @@ def __init__(
         self, exported_program: ExportedProgram, passes: Optional[List[PassType]] = None
     ) -> None:
         """
-        A helper class to run multiple XNNPack passes on a program
+        A helper class to run multiple XNNPACK passes on a program
         If passes list is empty, all passes in XNNPACK will be run.
         Otherwise, only the passes in the list will be run.
         """
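
The docstring above describes a run-all-or-subset rule. Below is a minimal, self-contained sketch of that behavior, assuming a functional pass interface; the names `run_passes`, `Program`, and `ALL_PASSES` are illustrative stand-ins, not the real ExecuTorch API:

```python
from typing import Callable, List, Optional

# Stand-in types: the real helper operates on an ExportedProgram and
# XNNPACKPass instances, which are not reproduced here.
Program = dict
PassType = Callable[[Program], Program]

# The real manager pre-registers every XNNPACK pass; empty here for the sketch.
ALL_PASSES: List[PassType] = []

def run_passes(program: Program, passes: Optional[List[PassType]] = None) -> Program:
    # No explicit pass list -> run all registered passes; otherwise run
    # only the requested subset, in order.
    for p in passes if passes else ALL_PASSES:
        program = p(program)
    return program
```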

@@ -15,13 +15,13 @@
 # TODO(T151254305) use subgraph_rewriter
 class ChannelsLastTaggedReshapePass(XNNPACKPass):
     """
-    This pass is Internal to XNNPack only! It is meant to give a new representation
-    of the edge graph to be consumed by XNNPack Preprocess. All added operators
+    This pass is Internal to XNNPACK only! It is meant to give a new representation
+    of the edge graph to be consumed by XNNPACK Preprocess. All added operators
     will be consumed by delegate and turned to delegate blobs.
 
     Edge IR graph pass to add operator stubs that signal a change in
     memory format from contiguous to channels last. This is to help with
-    XNNPack Delegate to add transpose nodes to change input memory format
+    XNNPACK Delegate to add transpose nodes to change input memory format
     at runtime and run operators in Channels Last Format.
 
     During this pass, nhwc nodes are not converted back to nchw immediately.
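
The memory-format change this pass tags can be reproduced directly in PyTorch. A small sketch, assuming a 4D NCHW input of the kind the delegate transposes at runtime:

```python
import torch

x = torch.randn(1, 3, 8, 8)                  # default contiguous (NCHW) layout
y = x.to(memory_format=torch.channels_last)  # same logical shape, NHWC data layout
assert y.shape == x.shape
assert y.is_contiguous(memory_format=torch.channels_last)
```

The transpose stubs the pass inserts represent exactly this layout change, so that downstream operators can run in channels-last format at runtime.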

backends/xnnpack/runtime/XNNCompiler.cpp (2 additions, 2 deletions)

@@ -85,7 +85,7 @@ bool isQuantizedDataType(const xnn_datatype data_type) {
 
 /**
 Converts dims from uint32 to size_t. Takes in a flatbuffer vector
-of uint32_t and returns a std::vector of size_t. XNNPack takes in
+of uint32_t and returns a std::vector of size_t. XNNPACK takes in
 dims of size_t* but tensor shape is serialized in flatbuffer as
 int32_t. As a result, we need to static cast the shapes to size_t
 */
@@ -143,7 +143,7 @@ Error defineTensor(
   // to properly convert the uint32_t* to size_t*
   std::vector<size_t> dims_data = flatbufferDimsToVector(tensor_value->dims());
 
-  // XNNPack Id
+  // XNNPACK Id
   uint32_t id = XNN_INVALID_VALUE_ID;
 
   // Get Pointer to constant data from flatbuffer, if its non-constant

backends/xnnpack/runtime/XNNCompiler.h (1 addition, 1 deletion)

@@ -21,7 +21,7 @@ namespace delegate {
 
 class XNNCompiler {
  public:
-  // Takes Flatbuffer Serialized XNNPack Model and rebuilds the xnn-subgraph
+  // Takes Flatbuffer Serialized XNNPACK Model and rebuilds the xnn-subgraph
   // returns an executor object that holds the xnn runtime object which we
   // can then use to set inputs and run inference using the xnn graph.
   __ET_NODISCARD static Error compileModel(

backends/xnnpack/test/test_xnnpack_passes.py (1 addition, 1 deletion)

@@ -42,7 +42,7 @@
 from torch.testing import FileCheck
 
 
-class TestXNNPackPasses(unittest.TestCase):
+class TestXNNPACKPasses(unittest.TestCase):
     class TwoOutputs(OpSequencesAddConv2d):
         def __init__(self):
             super().__init__(1, 2)

backends/xnnpack/test/test_xnnpack_utils.py (1 addition, 1 deletion)

@@ -126,7 +126,7 @@ def assert_outputs_equal(self, model_output, ref_output):
         """
         Helper testing function that asserts that the model output and the reference output
         are equal within some tolerance. Due to numerical differences between eager mode and
-        the XNNPack backend, we relax the delta such that the absolute tolerance is 1e-3 and
+        the XNNPACK backend, we relax the delta such that the absolute tolerance is 1e-3 and
         the relative tolerance is 1e-3.
         """
 
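
A minimal sketch of the check this docstring describes, assuming flat tensor outputs; `torch.allclose` with atol and rtol of 1e-3 mirrors the relaxed tolerances named above:

```python
import torch

def outputs_equal(model_output: torch.Tensor, ref_output: torch.Tensor) -> bool:
    # Relaxed tolerances absorb numerical drift between eager mode
    # and the XNNPACK backend.
    return torch.allclose(model_output, ref_output, atol=1e-3, rtol=1e-3)

# Outputs that differ by less than the combined tolerance compare equal.
assert outputs_equal(torch.tensor([1.0000, 2.0000]), torch.tensor([1.0005, 2.0005]))
```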

backends/xnnpack/xnnpack_preprocess.py (2 additions, 2 deletions)

@@ -225,7 +225,7 @@ def preprocess(
                 )
             else:
                 raise RuntimeError(
-                    f"For {node}, {node.op}:{node.target.__name__} is not supported in XNNPack Delegate"
+                    f"For {node}, {node.op}:{node.target.__name__} is not supported in XNNPACK Delegate"
                 )
         elif node.op in [
             "get_attr",
@@ -234,5 +234,5 @@
         ]:
             continue
         else:
-            raise RuntimeError(f"{node.op} is not supported in XNNPack")
+            raise RuntimeError(f"{node.op} is not supported in XNNPACK")
     return PreprocessResult(processed_bytes=convert_to_flatbuffer(xnnpack_graph))
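
The two RuntimeErrors above are the fallthrough arms of a three-way dispatch over node kinds. A self-contained sketch of that control flow; the `SUPPORTED` set stands in for the real visitor registry, and the skip list below shows only `get_attr` because the diff elides the remaining entries:

```python
SUPPORTED = {"aten.add.Tensor"}  # illustrative; the real code maps ops to node visitors

def dispatch(op: str, target: str) -> str:
    if op == "call_function":
        if target in SUPPORTED:
            return f"define {target} in the XNNPACK graph"
        raise RuntimeError(f"{op}:{target} is not supported in XNNPACK Delegate")
    elif op in ["get_attr"]:  # the real skip list has more entries (elided in the diff)
        return "skip"
    else:
        raise RuntimeError(f"{op} is not supported in XNNPACK")
```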