This repository was archived by the owner on Jun 15, 2023. It is now read-only.

Commit bc25a68 (1 parent: 8b5afbe)
doc refresh Jul 22 2022


44 files changed (+794, -232 lines)

doc_source/IC-Hyperparameter.md

Lines changed: 1 addition & 1 deletion

@@ -18,7 +18,7 @@ Hyperparameters are parameters that are set before a machine learning model begi
 | epochs | Number of training epochs\. **Optional** Valid values: positive integer Default value: 30 |
 | eps | The epsilon for `adam` and `rmsprop`\. It is usually set to a small value to avoid division by 0\. **Optional** Valid values: float\. Range in \[0, 1\]\. Default value: 1e\-8 |
 | gamma | The gamma for `rmsprop`, the decay factor for the moving average of the squared gradient\. **Optional** Valid values: float\. Range in \[0, 1\]\. Default value: 0\.9 |
-| image\_shape | The input image dimensions, which is the same size as the input layer of the network\. The format is defined as '`num_channels`, height, width'\. The image dimension can take on any value as the network can handle varied dimensions of the input\. However, there may be memory constraints if a larger image dimension is used\. Pretrained models can use only a fixed 224 x 224 image size\. Typical image dimensions for image classification are '3, 224, 224'\. This is similar to the ImageNet dataset\. For training, if any input image is smaller than this parameter in any dimension, training fails\. If an image is larger, a portion of the image is cropped, with the cropped area specified by this parameter\. If hyperparameter `augmentation_type` is set, random crop is taken; otherwise, central crop is taken\. At inference, input images are resized to the `image_shape` that was used during training\. Aspect ratio is not preserved, and images are not cropped\. **Optional** Valid values: string Default value: ‘3, 224, 224’ |
+| image\_shape | The input image dimensions, which is the same size as the input layer of the network\. The format is defined as '`num_channels`, height, width'\. The image dimension can take on any value as the network can handle varied dimensions of the input\. However, there may be memory constraints if a larger image dimension is used\. Pretrained models can use only a fixed 224 x 224 image size\. Typical image dimensions for image classification are '3,224,224'\. This is similar to the ImageNet dataset\. For training, if any input image is smaller than this parameter in any dimension, training fails\. If an image is larger, a portion of the image is cropped, with the cropped area specified by this parameter\. If hyperparameter `augmentation_type` is set, random crop is taken; otherwise, central crop is taken\. At inference, input images are resized to the `image_shape` that was used during training\. Aspect ratio is not preserved, and images are not cropped\. **Optional** Valid values: string Default value: ‘3,224,224’ |
 | kv\_store | Weight update synchronization mode during distributed training\. The weight updates can be updated either synchronously or asynchronously across machines\. Synchronous updates typically provide better accuracy than asynchronous updates but can be slower\. See distributed training in MXNet for more details\. This parameter is not applicable to single machine training\. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sagemaker/latest/dg/IC-Hyperparameter.html) **Optional** Valid values: `dist_sync` or `dist_async` Default value: no default value |
 | learning\_rate | Initial learning rate\. **Optional** Valid values: float\. Range in \[0, 1\]\. Default value: 0\.1 |
 | lr\_scheduler\_factor | The ratio to reduce learning rate used in conjunction with the `lr_scheduler_step` parameter, defined as `lr_new` = `lr_old` \* `lr_scheduler_factor`\. **Optional** Valid values: float\. Range in \[0, 1\]\. Default value: 0\.1 |
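The `image_shape` string documented above uses the format '`num_channels`, height, width' (default '3,224,224'), and a malformed value fails at training time. As an illustration only, a minimal client-side validator could parse and check it before launching a job; the `parse_image_shape` helper below is hypothetical and not part of the SageMaker SDK.

```python
def parse_image_shape(value: str) -> tuple:
    """Parse an image_shape hyperparameter string such as '3,224,224'.

    The documented format is 'num_channels,height,width'; this is an
    illustrative sketch (the function name is hypothetical, not an API).
    """
    parts = [p.strip() for p in value.split(",")]
    if len(parts) != 3:
        raise ValueError(f"expected 'num_channels,height,width', got {value!r}")
    channels, height, width = (int(p) for p in parts)
    if min(channels, height, width) <= 0:
        raise ValueError("all image_shape dimensions must be positive")
    return channels, height, width

# The documented default, '3,224,224', matches the ImageNet-style input size.
print(parse_image_shape("3,224,224"))  # (3, 224, 224)
```

A check like this catches typos locally instead of after a training job has already started.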

doc_source/algorithms-choose.md

Lines changed: 1 addition & 1 deletion

@@ -23,7 +23,7 @@ The following sections provide guidance concerning implementation options, machi
 
 After choosing an algorithm, you must decide which implementation of it you want to use\. Amazon SageMaker supports three implementation options that require increasing levels of effort\.
 + **Pre\-trained models** require the least effort and are models ready to deploy or to fine\-tune and deploy using SageMaker JumpStart\.
-+ **Built\-in algorithms** require the more effort and scale if the data set is large and significant resources are needed to train and deploy the model\.
++ **Built\-in algorithms** require more effort and scale if the data set is large and significant resources are needed to train and deploy the model\.
 + If there is no built\-in solution that works, try to develop one that uses **pre\-made images for machine and deep learning frameworks** for supported frameworks such as Scikit\-Learn, TensorFlow, PyTorch, MXNet, or Chainer\.
 + If you need to run custom packages or use any code which isn’t a part of a supported framework or available via PyPi, then you need to build **your own custom Docker image** that is configured to install the necessary packages or software\. The custom image must also be pushed to an online repository like the Amazon Elastic Container Registry\.
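The implementation options above form an effort-ordered fallback chain: pre-trained model, built-in algorithm, pre-made framework image, custom Docker image. A toy sketch encoding that ordering follows; the flags and function name are hypothetical, purely to make the decision order concrete.

```python
def choose_implementation(pretrained_fits: bool,
                          builtin_fits: bool,
                          framework_supported: bool) -> str:
    """Illustrative encoding of the effort-ordered options above.

    All parameters and the function itself are hypothetical; the ordering
    mirrors the documentation: least effort first, custom image as fallback.
    """
    if pretrained_fits:
        return "pre-trained model (SageMaker JumpStart)"
    if builtin_fits:
        return "built-in algorithm"
    if framework_supported:
        return "pre-made framework image"
    return "custom Docker image"

# A large dataset with no suitable pre-trained model lands on a built-in algorithm.
print(choose_implementation(False, True, True))  # built-in algorithm
```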

doc_source/autogluon-tabular-hyperparameters.md

Lines changed: 3 additions & 0 deletions

@@ -2,6 +2,9 @@
 
 The following table contains the subset of hyperparameters that are required or most commonly used for the Amazon SageMaker AutoGluon\-Tabular algorithm\. Users set these parameters to facilitate the estimation of model parameters from data\. The SageMaker AutoGluon\-Tabular algorithm is an implementation of the open\-source [AutoGluon\-Tabular](https://github.com/awslabs/autogluon) package\.
 
+**Note**
+The default hyperparameters are based on example datasets in the [AutoGluon\-Tabular sample notebooks](autogluon-tabular.md#autogluon-tabular-sample-notebooks)\.
+
 The SageMaker AutoGluon\-Tabular algorithm automatically chooses an evaluation metric based on the type of classification problem\. The algorithm detects the type of classification problem based on the number of labels in your data\. For regression problems, the evaluation metric is root mean squared error\. For binary classification problems, the evaluation metric is area under the receiver operating characteristic curve \(AUC\)\. For multiclass classification problems, the evaluation metric is accuracy\.
 
 **Note**
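The metric-selection rule described above (regression: RMSE, binary classification: AUC, multiclass: accuracy) can be sketched as a small function. The label-based detection here is a simplification of whatever AutoGluon does internally, and the helper name is hypothetical.

```python
def default_eval_metric(labels) -> str:
    """Sketch of the documented metric-selection rule (hypothetical helper).

    Continuous labels stand in for a regression target here; the real
    algorithm's problem-type detection logic is internal to AutoGluon.
    """
    distinct = set(labels)
    if all(isinstance(v, float) for v in distinct):
        return "root mean squared error"  # regression target
    if len(distinct) == 2:
        return "AUC"                      # binary classification
    return "accuracy"                     # multiclass classification

print(default_eval_metric([0, 1, 1, 0]))  # AUC
```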
