Commit 53aed84

Mention Python agnosticism in custom ops tutorial
1 parent 2793639 commit 53aed84

File tree: 2 files changed (+46, -8 lines)

advanced_source/cpp_custom_ops.rst

Lines changed: 44 additions & 6 deletions
@@ -63,9 +63,42 @@ Using ``cpp_extension`` is as simple as writing the following ``setup.py``:
 
 If you need to compile CUDA code (for example, ``.cu`` files), then instead use
 `torch.utils.cpp_extension.CUDAExtension <https://pytorch.org/docs/stable/cpp_extension.html#torch.utils.cpp_extension.CUDAExtension>`_.
-Please see how
-`extension-cpp <https://github.com/pytorch/extension-cpp>`_ for an example for
-how this is set up.
+Please see `extension-cpp <https://github.com/pytorch/extension-cpp>`_ for an
+example of how this is set up.
+
+In PyTorch 2.6 and later, if your custom library adheres to the `CPython Stable
+Limited API <https://docs.python.org/3/c-api/stable.html>`_ or avoids CPython
+entirely, you can build one Python-agnostic wheel against a minimum supported
+CPython version through setuptools' ``py_limited_api`` flag, like so:
+
+.. code-block:: python
+
+   from setuptools import setup, Extension
+   from torch.utils import cpp_extension
+
+   setup(name="extension_cpp",
+         ext_modules=[
+             cpp_extension.CppExtension(
+                 "extension_cpp", ["muladd.cpp"], py_limited_api=True)],
+         cmdclass={'build_ext': cpp_extension.BuildExtension},
+         options={"bdist_wheel": {"py_limited_api": "cp39"}})
+
+Note that you must specify ``py_limited_api=True`` both within ``setup``
+and also as an option to the ``"bdist_wheel"`` command with the minimal supported
+Python version (in this case, 3.9). This ``setup`` would build one wheel that could
+be installed across multiple Python versions ``python>=3.9``. Please see
+`torchao <https://github.com/pytorch/ao>`_ for an example.
+
+.. note::
+
+   You must verify independently that the built wheel is truly Python agnostic.
+   Specifying ``py_limited_api`` does not check for any guarantees, so it is possible
+   to build a wheel that looks Python agnostic but will crash, or worse, be silently
+   incorrect, in another Python environment. Take care to avoid using unstable CPython
+   APIs, for example APIs from libtorch_python (in particular pytorch/python bindings),
+   and to only use APIs from libtorch (ATen objects, operators, and the dispatcher).
+   For example, to give access to custom ops from Python, the library should register
+   the ops through the dispatcher (covered below!).
 
 Defining the custom op and adding backend implementations
 ---------------------------------------------------------
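The effect of the ``py_limited_api`` settings in the hunk above shows up in the built wheel's filename: a stable-ABI build is tagged ``cp39-abi3-<platform>`` rather than being pinned to a single interpreter. As a minimal sketch, here is how those tags can be read off a wheel filename (the helper names are hypothetical, not part of setuptools; wheel names follow the ``{name}-{version}-{python tag}-{abi tag}-{platform tag}.whl`` convention):

```python
# Hypothetical helpers for inspecting wheel filename tags; not part of
# setuptools or torch. Assumes no optional build-tag component in the name.
def wheel_tags(wheel_filename):
    """Return (python_tag, abi_tag, platform_tag) from a wheel filename."""
    parts = wheel_filename[: -len(".whl")].split("-")
    return tuple(parts[-3:])

def looks_python_agnostic(wheel_filename):
    """True if the wheel's tags claim it runs across CPython versions."""
    python_tag, abi_tag, _ = wheel_tags(wheel_filename)
    # abi3 = stable-ABI extension; a py3 python tag would mean pure Python.
    return abi_tag == "abi3" or python_tag.startswith("py")

# A wheel built with options={"bdist_wheel": {"py_limited_api": "cp39"}}:
print(looks_python_agnostic("extension_cpp-0.0.1-cp39-abi3-manylinux2014_x86_64.whl"))   # True
# A wheel pinned to CPython 3.10:
print(looks_python_agnostic("extension_cpp-0.0.1-cp310-cp310-manylinux2014_x86_64.whl"))  # False
```

Checking the tags is only a sanity check on what the build claims; as the note above says, it does not prove the binary actually avoids unstable CPython APIs.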
@@ -177,7 +210,7 @@ operator specifies how to compute the metadata of output tensors given the metad
 The FakeTensor kernel should return dummy Tensors of your choice with
 the correct Tensor metadata (shape/strides/``dtype``/device).
 
-We recommend that this be done from Python via the `torch.library.register_fake` API,
+We recommend that this be done from Python via the ``torch.library.register_fake`` API,
 though it is possible to do this from C++ as well (see
 `The Custom Operators Manual <https://pytorch.org/docs/main/notes/custom_operators.html>`_
 for more details).
@@ -188,7 +221,9 @@ for more details).
 # before calling ``torch.library`` APIs that add registrations for the
 # C++ custom operator(s). The following import loads our
 # C++ custom operator definitions.
-# See the next section for more details.
+# Note that if you are striving for Python agnosticism, you should use
+# the ``load_library(...)`` API call instead. See the next section for
+# more details.
 from . import _C
 
 @torch.library.register_fake("extension_cpp::mymuladd")
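The fake kernel registered above has one job: metadata propagation. For an elementwise op like ``mymuladd``, the key step is computing the broadcast output shape. The rule itself can be sketched in plain Python (a standalone illustration of the broadcasting semantics, not a torch API):

```python
import itertools

def broadcast_shape(shape_a, shape_b):
    """PyTorch/NumPy broadcasting: align shapes from the right; each pair of
    dims must be equal or contain a 1; the output dim is the larger one."""
    out = []
    for a, b in itertools.zip_longest(
            reversed(shape_a), reversed(shape_b), fillvalue=1):
        if a != b and 1 not in (a, b):
            raise ValueError(
                f"shapes {shape_a} and {shape_b} are not broadcastable")
        out.append(max(a, b))
    return tuple(reversed(out))

print(broadcast_shape((3, 1), (4,)))       # (3, 4)
print(broadcast_shape((2, 3, 1), (1, 5)))  # (2, 3, 5)
```

Inside a real fake kernel you would not reimplement this by hand; ``torch.empty(broadcast_shape(...))``-style allocation, or simply computing ``a * b + c`` on the fake inputs, yields a dummy tensor with the right metadata.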
@@ -214,7 +249,10 @@ of two ways:
 1. If you're following this tutorial, importing the Python C extension module
    we created will load the C++ custom operator definitions.
 2. If your C++ custom operator is located in a shared library object, you can
-   also use ``torch.ops.load_library("/path/to/library.so")`` to load it.
+   also use ``torch.ops.load_library("/path/to/library.so")`` to load it. This
+   is the blessed path for Python agnosticism, as you will not have a Python C
+   extension module to import. See `torchao __init__.py <https://github.com/pytorch/ao/blob/881e84b4398eddcea6fee4d911fc329a38b5cd69/torchao/__init__.py#L26-L28>`_
+   for an example.
 
 
 Adding training (autograd) support for an operator
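In the Python-agnostic setup there is no importable ``_C`` extension module, so the package's ``__init__.py`` must locate the compiled library on disk and hand its path to ``torch.ops.load_library``. A rough sketch of that lookup, modeled loosely on the torchao pattern (the helper name and file layout are assumptions, not a torch API):

```python
from pathlib import Path

def find_extension_library(package_dir, stem="_C"):
    """Return the path of a compiled extension shipped inside the package
    directory, or None if no build artifact is present.

    Stable-ABI builds are typically named _C.abi3.so, while regular builds
    use platform-specific suffixes such as _C.cpython-312-x86_64-linux-gnu.so;
    the glob below matches both.
    """
    candidates = sorted(Path(package_dir).glob(f"{stem}*.so"))
    return str(candidates[0]) if candidates else None

# In the package's __init__.py one would then do (requires torch):
#   so_path = find_extension_library(Path(__file__).parent)
#   if so_path is not None:
#       torch.ops.load_library(so_path)
```

Loading the library this way runs its static initializers, which execute the C++ ``TORCH_LIBRARY`` registrations and make the ops available under ``torch.ops.extension_cpp``.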

index.rst

Lines changed: 2 additions & 2 deletions
@@ -426,14 +426,14 @@ Welcome to PyTorch Tutorials
 
 .. customcarditem::
    :header: Custom C++ and CUDA Extensions
-   :card_description: Create a neural network layer with no parameters using numpy. Then use scipy to create a neural network layer that has learnable weights.
+   :card_description: Create a neural network layer with no parameters using numpy. Then use scipy to create a neural network layer that has learnable weights.
    :image: _static/img/thumbnails/cropped/Custom-Cpp-and-CUDA-Extensions.png
    :link: advanced/cpp_extension.html
    :tags: Extending-PyTorch,Frontend-APIs,C++,CUDA
 
 .. customcarditem::
    :header: Extending TorchScript with Custom C++ Operators
-   :card_description: Implement a custom TorchScript operator in C++, how to build it into a shared library, how to use it in Python to define TorchScript models and lastly how to load it into a C++ application for inference workloads.
+   :card_description: Implement a custom TorchScript operator in C++, how to build it into a shared library, how to use it in Python to define TorchScript models and lastly how to load it into a C++ application for inference workloads.
    :image: _static/img/thumbnails/cropped/Extending-TorchScript-with-Custom-Cpp-Operators.png
    :link: advanced/torch_script_custom_ops.html
    :tags: Extending-PyTorch,Frontend-APIs,TorchScript,C++

0 commit comments