Commit e5e95fc

Removing Conda install references.
1 parent a5632da commit e5e95fc

File tree

1 file changed: 5 additions, 5 deletions


advanced_source/sharding.rst

Lines changed: 5 additions & 5 deletions
@@ -14,9 +14,10 @@ Requirements: - python >= 3.7
 We highly recommend CUDA when using torchRec. If using CUDA: - cuda >=
 11.0
 
+.. Should these be updated?
 .. code:: python
 
-    # install conda to make installying pytorch with cudatoolkit 11.3 easier.
+    # install conda to make installying pytorch with cudatoolkit 11.3 easier.
     !sudo rm Miniconda3-py37_4.9.2-Linux-x86_64.sh Miniconda3-py37_4.9.2-Linux-x86_64.sh.*
     !sudo wget https://repo.anaconda.com/miniconda/Miniconda3-py37_4.9.2-Linux-x86_64.sh
     !sudo chmod +x Miniconda3-py37_4.9.2-Linux-x86_64.sh
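
The bootstrap above is the conda reference this commit's message says it is removing (the new ``.. Should these be updated?`` comment flags it for review). For context, a conda-free setup might look like the sketch below; the ``cu113`` wheel index and the ``torchrec`` package name are assumptions, not taken from this diff.

.. code:: python

    # Hedged sketch: install a CUDA 11.3 build of PyTorch plus TorchRec via pip
    # instead of conda (package names and index URL are assumed, not from the diff).
    !pip3 install torch --extra-index-url https://download.pytorch.org/whl/cu113
    !pip3 install torchrec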
@@ -213,7 +214,7 @@ embedding table placement using planner and generate sharded model using
     )
     sharders = [cast(ModuleSharder[torch.nn.Module], EmbeddingBagCollectionSharder())]
     plan: ShardingPlan = planner.collective_plan(module, sharders, pg)
-
+
     sharded_model = DistributedModelParallel(
         module,
         env=ShardingEnv.from_process_group(pg),
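
The context lines in this hunk come from the tutorial's planning step. A minimal sketch of that flow, assuming a toy ``EmbeddingBagCollection`` (the table and feature names below are invented) and an already-initialized process group:

.. code:: python

    import torch
    import torch.distributed as dist
    from typing import cast

    from torchrec.distributed.embeddingbag import EmbeddingBagCollectionSharder
    from torchrec.distributed.model_parallel import DistributedModelParallel
    from torchrec.distributed.planner import EmbeddingShardingPlanner, Topology
    from torchrec.distributed.types import ModuleSharder, ShardingEnv, ShardingPlan
    from torchrec.modules.embedding_configs import EmbeddingBagConfig
    from torchrec.modules.embedding_modules import EmbeddingBagCollection

    # toy module on the meta device; "t0"/"f0" are illustrative names only
    module = EmbeddingBagCollection(
        tables=[
            EmbeddingBagConfig(
                name="t0", embedding_dim=16, num_embeddings=100, feature_names=["f0"]
            )
        ],
        device=torch.device("meta"),
    )

    pg = dist.group.WORLD  # assumes dist.init_process_group(...) already ran
    planner = EmbeddingShardingPlanner(
        topology=Topology(world_size=2, compute_device="cuda")
    )
    sharders = [cast(ModuleSharder[torch.nn.Module], EmbeddingBagCollectionSharder())]
    # all ranks agree on one placement plan, then the module is wrapped with it
    plan: ShardingPlan = planner.collective_plan(module, sharders, pg)
    sharded_model = DistributedModelParallel(
        module,
        env=ShardingEnv.from_process_group(pg),
        plan=plan,
        sharders=sharders,
    )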
@@ -234,7 +235,7 @@ ranks.
 .. code:: python
 
     import multiprocess
-
+
     def spmd_sharing_simulation(
         sharding_type: ShardingType = ShardingType.TABLE_WISE,
         world_size = 2,
@@ -254,7 +255,7 @@ ranks.
             )
             p.start()
             processes.append(p)
-
+
         for p in processes:
             p.join()
             assert 0 == p.exitcode
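
The last two hunks both touch blank lines inside the tutorial's multiprocess driver. A hedged sketch of that spawn/join pattern; ``single_rank_execution`` is a hypothetical stand-in for the per-rank worker, not a name read from this diff:

.. code:: python

    import multiprocess

    from torchrec.distributed.types import ShardingType

    def single_rank_execution(rank: int, world_size: int, sharding_type: str) -> None:
        # hypothetical per-rank worker: in the tutorial this is where each rank
        # would initialize its process group and build the sharded model
        print(f"rank {rank}/{world_size} sharding with {sharding_type}")

    def spmd_sharing_simulation(
        sharding_type: ShardingType = ShardingType.TABLE_WISE,
        world_size: int = 2,
    ) -> None:
        ctx = multiprocess.get_context("spawn")
        processes = []
        for rank in range(world_size):
            p = ctx.Process(
                target=single_rank_execution,
                args=(rank, world_size, sharding_type.value),
            )
            p.start()
            processes.append(p)

        # fail loudly if any rank crashed
        for p in processes:
            p.join()
            assert 0 == p.exitcode

The ``multiprocess`` import matches the hunk above; it mirrors the standard-library ``multiprocessing`` API, so ``get_context("spawn")`` and ``Process`` behave the same way here.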
@@ -333,4 +334,3 @@ With data parallel, we will repeat the tables for all devices.
 
 rank:0,sharding plan: {'': {'large_table_0': ParameterSharding(sharding_type='data_parallel', compute_kernel='batched_dense', ranks=[0, 1], sharding_spec=None), 'large_table_1': ParameterSharding(sharding_type='data_parallel', compute_kernel='batched_dense', ranks=[0, 1], sharding_spec=None), 'small_table_0': ParameterSharding(sharding_type='data_parallel', compute_kernel='batched_dense', ranks=[0, 1], sharding_spec=None), 'small_table_1': ParameterSharding(sharding_type='data_parallel', compute_kernel='batched_dense', ranks=[0, 1], sharding_spec=None)}}
 rank:1,sharding plan: {'': {'large_table_0': ParameterSharding(sharding_type='data_parallel', compute_kernel='batched_dense', ranks=[0, 1], sharding_spec=None), 'large_table_1': ParameterSharding(sharding_type='data_parallel', compute_kernel='batched_dense', ranks=[0, 1], sharding_spec=None), 'small_table_0': ParameterSharding(sharding_type='data_parallel', compute_kernel='batched_dense', ranks=[0, 1], sharding_spec=None), 'small_table_1': ParameterSharding(sharding_type='data_parallel', compute_kernel='batched_dense', ranks=[0, 1], sharding_spec=None)}}
-
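
The plan output above shows every table placed with the ``data_parallel`` sharding type on both ranks. A sketch of one way to request that outcome, assuming the tutorial steers the planner with per-table constraints (the constraint dict below is illustrative):

.. code:: python

    from torchrec.distributed.planner import EmbeddingShardingPlanner, Topology
    from torchrec.distributed.planner.types import ParameterConstraints
    from torchrec.distributed.types import ShardingType

    # constrain each table named in the output above to data_parallel placement
    constraints = {
        name: ParameterConstraints(sharding_types=[ShardingType.DATA_PARALLEL.value])
        for name in ("large_table_0", "large_table_1", "small_table_0", "small_table_1")
    }
    planner = EmbeddingShardingPlanner(
        topology=Topology(world_size=2, compute_device="cuda"),
        constraints=constraints,
    )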
