
Commit cc7b690

Merge branch 'main' into move-back-to-stable-binaries-2.2

2 parents: 00586d2 + 4fbf946
File tree: 3 files changed (+26 −15 lines)

advanced_source/coding_ddpg.py

Lines changed: 15 additions & 6 deletions
@@ -63,16 +63,25 @@
 # %%bash
 # pip3 install torchrl mujoco glfw
 
-import torchrl
-import torch
-import tqdm
-from typing import Tuple
-
 # sphinx_gallery_start_ignore
 import warnings
 warnings.filterwarnings("ignore")
+import multiprocessing
+# TorchRL prefers the spawn method, which restricts creation of ``~torchrl.envs.ParallelEnv``
+# to inside the ``__main__`` method call, but for ease of reading this code switches to fork,
+# which is also the default start method in Google's Colaboratory.
+try:
+    multiprocessing.set_start_method("fork")
+except RuntimeError:
+    assert multiprocessing.get_start_method() == "fork"
 # sphinx_gallery_end_ignore
 
+
+import torchrl
+import torch
+import tqdm
+from typing import Tuple
+
 ###############################################################################
 # We will execute the policy on CUDA if available
 device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

@@ -1219,6 +1228,6 @@ def ceil_div(x, y):
 #
 # To iterate further on this loss module we might consider:
 #
-# - Using `@dispatch` (see `[Feature] Distpatch IQL loss module <https://github.com/pytorch/rl/pull/1230>`_.
+# - Using `@dispatch` (see `[Feature] Distpatch IQL loss module <https://github.com/pytorch/rl/pull/1230>`_.)
 # - Allowing flexible TensorDict keys.
 #
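For context on the start-method change above: under the spawn start method, child processes re-import the main module, so a ``ParallelEnv`` may only be constructed behind an ``if __name__ == "__main__":`` guard; fork has no such restriction, which is why the tutorial switches to it. Below is a minimal sketch of the spawn-safe pattern the comment refers to; the ``Pendulum-v1`` environment and the ``make_env`` factory are illustrative assumptions, not part of this commit.

    import multiprocessing

    from torchrl.envs import ParallelEnv
    from torchrl.envs.libs.gym import GymEnv

    def make_env():
        # Each worker process calls this factory to build its own env instance.
        return GymEnv("Pendulum-v1")

    if __name__ == "__main__":
        # Under "spawn", workers re-import this module, so ParallelEnv
        # creation must stay inside the __main__ guard.
        multiprocessing.set_start_method("spawn", force=True)
        env = ParallelEnv(2, make_env)
        env.rollout(3)  # quick sanity rollout across the 2 workers
        env.close()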

index.rst

Lines changed: 5 additions & 9 deletions
@@ -3,15 +3,11 @@ Welcome to PyTorch Tutorials
 
 What's new in PyTorch tutorials?
 
-* `Getting Started with Distributed Checkpoint (DCP) <https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html>`__
-* `torch.export Tutorial <https://pytorch.org/tutorials/intermediate/torch_export_tutorial.html>`__
-* `Facilitating New Backend Integration by PrivateUse1 <https://pytorch.org/tutorials/advanced/privateuseone.html>`__
-* `(prototype) Accelerating BERT with semi-structured (2:4) sparsity <https://pytorch.org/tutorials/prototype/semi_structured_sparse.html>`__
-* `(prototype) PyTorch 2 Export Quantization-Aware Training (QAT) <https://pytorch.org/tutorials/prototype/pt2e_quant_qat.html>`__
-* `(prototype) PyTorch 2 Export Post Training Quantization with X86 Backend through Inductor <https://pytorch.org/tutorials/prototype/pt2e_quant_ptq_x86_inductor.html>`__
-* `(prototype) Inductor C++ Wrapper Tutorial <https://pytorch.org/tutorials/prototype/inductor_cpp_wrapper_tutorial.html>`__
-* `How to save memory by fusing the optimizer step into the backward pass <https://pytorch.org/tutorials/intermediate/optimizer_step_in_backward_tutorial.html>`__
-* `Tips for Loading an nn.Module from a Checkpoint <https://pytorch.org/tutorials/recipes/recipes/module_load_state_dict_tips.html>`__
+* `PyTorch Inference Performance Tuning on AWS Graviton Processors <https://pytorch.org/tutorials/recipes/inference_tuning_on_aws_graviton.html>`__
+* `Using TORCH_LOGS python API with torch.compile <https://pytorch.org/tutorials/recipes/torch_logs.html>`__
+* `PyTorch 2 Export Quantization with X86 Backend through Inductor <https://pytorch.org/tutorials/prototype/pt2e_quant_x86_inductor.html>`__
+* `Getting Started with DeviceMesh <https://pytorch.org/tutorials/recipes/distributed_device_mesh.html>`__
+* `Compiling the optimizer with torch.compile <https://pytorch.org/tutorials/recipes/compiling_optimizer.html>`__
 
 
 .. raw:: html

recipes_source/compiling_optimizer.rst

Lines changed: 6 additions & 0 deletions
@@ -86,3 +86,9 @@ Sample Results:
 
 * Eager runtime: 747.2437149845064us
 * Compiled runtime: 392.07384741178us
+
+See Also
+~~~~~~~~~
+
+* For an in-depth technical overview, see
+  `Compiling the optimizer with PT2 <https://dev-discuss.pytorch.org/t/compiling-the-optimizer-with-pt2/1669>`__
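The eager and compiled runtimes above come from timing the recipe's optimizer step before and after wrapping it in ``torch.compile``. A condensed sketch of the pattern being benchmarked follows; the layer sizes and the Adam choice mirror the recipe's setup as I understand it, so treat the details as illustrative (and note it needs a CUDA device).

    import torch

    # Build a stack of weights and populate gradients with one backward pass.
    model = torch.nn.Sequential(
        *[torch.nn.Linear(1024, 1024, False, device="cuda") for _ in range(10)]
    )
    output = model(torch.rand(1024, device="cuda"))
    output.sum().backward()

    opt = torch.optim.Adam(model.parameters(), lr=0.01)

    @torch.compile(fullgraph=False)
    def compiled_step():
        opt.step()

    # The first call triggers compilation; timing later calls gives the
    # "compiled runtime" reported above.
    compiled_step()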
