
Commit 56c0b83

Update compiling_optimizer_lr_scheduler.py
1 parent 63d47f4 commit 56c0b83

File tree

1 file changed: +3 -3 lines changed


recipes_source/compiling_optimizer_lr_scheduler.py

Lines changed: 3 additions & 3 deletions
@@ -7,7 +7,7 @@

#########################################################
# The optimizer is a key algorithm for training any deep learning model.
-# In this example, we will show how to pair the an optimizer, which has been compiled using ``torch.compile``,
+# In this example, we will show how to pair the optimizer, which has been compiled using ``torch.compile``,
# with the LR schedulers to accelerate training convergence.
#
# .. note::
@@ -100,13 +100,13 @@ def fn():

######################################################################
# With this example, we can see that we recompile the optimizer a few times
-# due to the guard failure on the 'lr' in param_groups[0].
+# due to the guard failure on the ``lr`` in ``param_groups[0]``.

######################################################################
# Conclusion
# ~~~~~~~~~~
#
-# In this tutorial we showed how to pair the ``torch.compile``d optimizer
+# In this tutorial we showed how to pair the optimizer compiled with ``torch.compile``
# with an LR Scheduler to accelerate training convergence. We used a model consisting
# of a simple sequence of linear layers with the Adam optimizer paired
# with a LinearLR scheduler to demonstrate the LR changing across iterations.
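
For reference, below is a minimal sketch (not part of this commit) of the pattern the edited prose describes: an Adam optimizer step compiled with ``torch.compile`` paired with a LinearLR scheduler on a small stack of linear layers. The layer sizes, device selection, and iteration count are illustrative assumptions; only the ``fn`` wrapper and the Adam/LinearLR pairing are taken from the diff above.

# Minimal sketch of the pattern in the recipe's prose; sizes and device
# handling here are illustrative assumptions, not copied from the recipe.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# A simple sequence of linear layers, as mentioned in the conclusion.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024, device=device),
    torch.nn.Linear(1024, 1024, device=device),
)

# Produce gradients so the optimizer has something to step on.
inp = torch.rand(1024, device=device)
model(inp).sum().backward()

opt = torch.optim.Adam(model.parameters(), lr=0.01)
sched = torch.optim.lr_scheduler.LinearLR(opt, total_iters=5)

@torch.compile()
def fn():
    # The compiled optimizer step, paired with the LR scheduler.
    opt.step()
    sched.step()

for _ in range(5):
    fn()
    # The scheduler rewrites the float ``lr`` in ``param_groups[0]`` each
    # iteration, which is the guard failure behind the recompiles noted above.
    print(opt.param_groups[0]["lr"])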
