
Commit 873a2fd

Merge branch 'main' into sekyondaMeta-Deadlink-update-4
2 parents: edb1e13 + 24c42d2

File tree

2 files changed (+3, -4 lines)


beginner_source/ddp_series_theory.rst

Lines changed: 1 addition & 1 deletion

@@ -7,7 +7,7 @@
 What is Distributed Data Parallel (DDP)
 =======================================

-Authors: `Suraj Subramanian <https://github.com/suraj813>`__
+Authors: `Suraj Subramanian <https://github.com/subramen>`__

 .. grid:: 2

intermediate_source/process_group_cpp_extension_tutorial.rst

Lines changed: 2 additions & 3 deletions

@@ -25,9 +25,8 @@ Basics

 PyTorch collective communications power several widely adopted distributed
 training features, including
-`DistributedDataParallel <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html>`__,
-`ZeroRedundancyOptimizer <https://pytorch.org/docs/stable/distributed.optim.html#torch.distributed.optim.ZeroRedundancyOptimizer>`__,
-`FullyShardedDataParallel <https://github.com/pytorch/pytorch/blob/master/torch/distributed/_fsdp/fully_sharded_data_parallel.py>`__.
+`DistributedDataParallel <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html>`__ and
+`ZeroRedundancyOptimizer <https://pytorch.org/docs/stable/distributed.optim.html#torch.distributed.optim.ZeroRedundancyOptimizer>`__.
 In order to make the same collective communication API work with
 different communication backends, the distributed package abstracts collective
 communication operations into a
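For context only, and not part of this diff: the paragraph touched by the hunk above refers to the collective communication API that works across backends. Below is a minimal sketch of that API, assuming a single-machine CPU run with the gloo backend; the worker function, the port number, and the world size of 2 are illustration choices, and the same all_reduce call is intended to work unchanged with other backends (nccl, mpi, or a custom C++ ProcessGroup extension such as the one this tutorial builds).

# Minimal illustrative sketch (not from the commit): the same collective API
# is used regardless of which backend init_process_group is given.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank, world_size):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"   # illustrative port choice
    # "gloo" keeps this sketch CPU-only; "nccl", "mpi", or a custom
    # C++ ProcessGroup extension would be passed the same way.
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    t = torch.ones(2) * rank
    dist.all_reduce(t, op=dist.ReduceOp.SUM)  # collective: sum across ranks
    print(f"rank {rank}: {t}")
    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = 2
    mp.spawn(worker, args=(world_size,), nprocs=world_size)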
