Commit 8cfcd9c

add ref link

1 parent: 89e8665

File tree

1 file changed (+2, -1)


doc/api/training/smd_model_parallel_release_notes/smd_model_parallel_change_log.rst

Lines changed: 2 additions & 1 deletion
@@ -28,7 +28,8 @@ SageMaker Distributed Model Parallel 1.15.0 Release Notes
   ``smp.save_checkpoint`` with ``partial=False``.
   Before, full checkpoints needed to be created by merging partial checkpoint
   files after training finishes.
-* ``DistributedTransformer`` now supports the ALiBi position embeddings.
+* `DistributedTransformer <https://sagemaker.readthedocs.io/en/stable/api/training/smp_versions/latest/smd_model_parallel_pytorch_tensor_parallel.html#smdistributed.modelparallel.torch.nn.DistributedTransformerLayer>`_
+  now supports the ALiBi position embeddings.
   When using DistributedTransformer, you can set the ``use_alibi`` parameter
   to ``True`` to use the Triton-based flash attention kernels. This helps
   evaluate sequences longer than those used for training.
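
For context on the feature this diff documents, here is a minimal sketch of how the new option might be used. It assumes a SageMaker training job where ``smdistributed.modelparallel`` is installed (the library does not run standalone), and every constructor argument except ``use_alibi``, which the release note names, is illustrative rather than taken from this commit.

    import smdistributed.modelparallel.torch as smp

    smp.init()  # initialize the model-parallel runtime for this training job

    # DistributedTransformerLayer is the class the new reference link points at.
    # Per the release note, setting use_alibi=True selects ALiBi position
    # embeddings backed by Triton-based flash attention kernels, which helps
    # when evaluating sequences longer than those used for training.
    # The sizing arguments below are placeholders, not values from this commit.
    layer = smp.nn.DistributedTransformerLayer(
        num_attention_heads=16,
        attention_head_size=64,
        hidden_size=1024,
        intermediate_size=4096,
        use_alibi=True,  # new in SMP 1.15.0
    )

ALiBi biases attention scores by token distance instead of adding learned position embeddings, which is why it extrapolates to sequences longer than those seen during training.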
