
Commit 4bddcc2

Soham Pal committed: Doc refresh 09-30-22

1 parent 84818d7, commit 4bddcc2

54 files changed: +1364 lines added, -338 lines removed


doc_source/IC-TF-HowItWorks.md

Lines changed: 2 additions & 2 deletions
@@ -1,5 +1,5 @@
-# How TensorFlow Image Classification Works<a name="IC-TF-HowItWorks"></a>
+# How Image Classification \- TensorFlow Works<a name="IC-TF-HowItWorks"></a>

-The TensorFlow Image Classification algorithm takes an image as input and classifies it into one of the output class labels\. Various deep learning networks such as MobileNet, ResNet, Inception, and EfficientNet are highly accurate for image classification\. There are also deep learning networks that are trained on large image datasets, such as ImageNet, which has over 11 million images and close to 11,000 classes\. After a network is trained with ImageNet data, you can then fine\-tune the network on a dataset with a particular focus to perform more specific classification tasks\. The Amazon SageMaker TensorFlow Image Classification algorithm supports transfer learning on many pretrained models that are available in the TensorFlow Hub\.
+The Image Classification \- TensorFlow algorithm takes an image as input and classifies it into one of the output class labels\. Various deep learning networks such as MobileNet, ResNet, Inception, and EfficientNet are highly accurate for image classification\. There are also deep learning networks that are trained on large image datasets, such as ImageNet, which has over 11 million images and almost 11,000 classes\. After a network is trained with ImageNet data, you can then fine\-tune the network on a dataset with a particular focus to perform more specific classification tasks\. The Amazon SageMaker Image Classification \- TensorFlow algorithm supports transfer learning on many pretrained models that are available in the TensorFlow Hub\.

According to the number of class labels in your training data, a classification layer is attached to the pretrained TensorFlow Hub model of your choice\. The classification layer consists of a dropout layer and a dense, fully\-connected layer with a 2\-norm regularizer that is initialized with random weights\. The model has hyperparameters for the dropout rate of the dropout layer and the L2 regularization factor for the dense layer\. You can then fine\-tune either the entire network \(including the pretrained model\) or only the top classification layer on new training data\. With this method of transfer learning, training with smaller datasets is possible\.
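To make the transfer-learning head described above concrete, the following is a minimal Keras sketch assuming a TensorFlow Hub feature-vector model. The hub URL, input size, class count, dropout rate, and L2 factor are illustrative placeholders, not the algorithm's actual internals.

```python
# Minimal sketch only: approximates the classification head described above on
# top of a frozen TF Hub feature extractor; it is not the algorithm's internals.
import tensorflow as tf
import tensorflow_hub as hub

num_classes = 10      # set from the number of class labels in your training data
dropout_rate = 0.2    # mirrors the dropout_rate hyperparameter
l2_factor = 0.0001    # mirrors the regularizers_l2 hyperparameter

# Placeholder TF Hub feature-vector model; any supported pretrained model works here.
feature_extractor = hub.KerasLayer(
    "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/5",
    trainable=False,  # set to True to fine-tune the entire network instead of only the head
)

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(224, 224, 3)),
    feature_extractor,
    tf.keras.layers.Dropout(dropout_rate),
    tf.keras.layers.Dense(
        num_classes,  # randomly initialized, fully connected output layer
        kernel_regularizer=tf.keras.regularizers.l2(l2_factor),  # 2-norm (L2) regularizer
    ),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True, label_smoothing=0.1),
    metrics=["accuracy"],
)
```

Freezing the feature extractor corresponds to fine-tuning only the top classification layer; setting it trainable corresponds to fine-tuning the entire network.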

doc_source/IC-TF-Hyperparameter.md

Lines changed: 5 additions & 3 deletions
@@ -1,23 +1,25 @@
-# TensorFlow Image Classification Hyperparameters<a name="IC-TF-Hyperparameter"></a>
+# Image Classification \- TensorFlow Hyperparameters<a name="IC-TF-Hyperparameter"></a>

-Hyperparameters are parameters that are set before a machine learning model begins learning\. The following hyperparameters are supported by the Amazon SageMaker built\-in Image Classification \- TensorFlow algorithm\. See [Tune a TensorFlow Image Classification Model](IC-TF-tuning.md) for information on hyperparameter tuning\.
+Hyperparameters are parameters that are set before a machine learning model begins learning\. The following hyperparameters are supported by the Amazon SageMaker built\-in Image Classification \- TensorFlow algorithm\. See [Tune an Image Classification \- TensorFlow model](IC-TF-tuning.md) for information on hyperparameter tuning\.


| Parameter Name | Description |
| --- | --- |
| augmentation | Set to `"True"` to apply `augmentation_random_flip`, `augmentation_random_rotation`, and `augmentation_random_zoom` to the training data\. Valid values: string, either: \(`"True"` or `"False"`\)\. Default value: `"False"`\. |
| augmentation\_random\_flip | Indicates which flip mode to use for data augmentation when `augmentation` is set to `"True"`\. For more information, see [RandomFlip](https://www.tensorflow.org/api_docs/python/tf/keras/layers/RandomFlip) in the TensorFlow documentation\. Valid values: string, any of the following: \(`"horizontal_and_vertical"`, `"vertical"`, or `"None"`\)\. Default value: `"horizontal_and_vertical"`\. |
-| augmentation\_random\_rotation | Indicates how much rotation to use for data augmentation when `augmentation` is set to `"True"`\. Values represent a fraction of 2π\. Positive values rotate counter clock\-wise while negative values rotate clockwise\. `0` means no rotation\. For more information, see [RandomRotation](https://www.tensorflow.org/api_docs/python/tf/keras/layers/RandomRotation) in the TensorFlow documentation\. Valid values: float, range: \[`-1.0`, `1.0`\]\. Default value: `0.2`\. |
+| augmentation\_random\_rotation | Indicates how much rotation to use for data augmentation when `augmentation` is set to `"True"`\. Values represent a fraction of 2π\. Positive values rotate counterclockwise while negative values rotate clockwise\. `0` means no rotation\. For more information, see [RandomRotation](https://www.tensorflow.org/api_docs/python/tf/keras/layers/RandomRotation) in the TensorFlow documentation\. Valid values: float, range: \[`-1.0`, `1.0`\]\. Default value: `0.2`\. |
| augmentation\_random\_zoom | Indicates how much vertical zoom to use for data augmentation when `augmentation` is set to `"True"`\. Positive values zoom out while negative values zoom in\. `0` means no zoom\. For more information, see [RandomZoom](https://www.tensorflow.org/api_docs/python/tf/keras/layers/RandomZoom) in the TensorFlow documentation\. Valid values: float, range: \[`-1.0`, `1.0`\]\. Default value: `0.1`\. |
| batch\_size | The batch size for training\. For training on instances with multiple GPUs, this batch size is used across the GPUs\. Valid values: positive integer\. Default value: `32`\. |
| beta\_1 | The beta1 for the `"adam"` optimizer\. Represents the exponential decay rate for the first moment estimates\. Ignored for other optimizers\. Valid values: float, range: \[`0.0`, `1.0`\]\. Default value: `0.9`\. |
| beta\_2 | The beta2 for the `"adam"` optimizer\. Represents the exponential decay rate for the second moment estimates\. Ignored for other optimizers\. Valid values: float, range: \[`0.0`, `1.0`\]\. Default value: `0.999`\. |
+| binary\_mode | When `binary_mode` is set to `"True"`, the model returns a single probability number for the positive class and can use additional `eval_metric` options\. Use only for binary classification problems\. Valid values: string, either: \(`"True"` or `"False"`\)\. Default value: `"False"`\. |
| dropout\_rate | The dropout rate for the dropout layer in the top classification layer\. Valid values: float, range: \[`0.0`, `1.0`\]\. Default value: `0.2`\. |
| early\_stopping | Set to `"True"` to use early stopping logic during training\. If `"False"`, early stopping is not used\. Valid values: string, either: \(`"True"` or `"False"`\)\. Default value: `"False"`\. |
| early\_stopping\_min\_delta | The minimum change needed to qualify as an improvement\. An absolute change less than the value of `early_stopping_min_delta` does not qualify as improvement\. Used only when `early_stopping` is set to `"True"`\. Valid values: float, range: \[`0.0`, `1.0`\]\. Default value: `0.0`\. |
| early\_stopping\_patience | The number of epochs to continue training with no improvement\. Used only when `early_stopping` is set to `"True"`\. Valid values: positive integer\. Default value: `5`\. |
| epochs | The number of training epochs\. Valid values: positive integer\. Default value: `3`\. |
| epsilon | The epsilon for `"adam"`, `"rmsprop"`, `"adadelta"`, and `"adagrad"` optimizers\. Usually set to a small value to avoid division by 0\. Ignored for other optimizers\. Valid values: float, range: \[`0.0`, `1.0`\]\. Default value: `1e-7`\. |
+| eval\_metric | If `binary_mode` is set to `"False"`, `eval_metric` can only be `"accuracy"`\. If `binary_mode` is `"True"`, select any of the valid values\. For more information, see [Metrics](https://www.tensorflow.org/api_docs/python/tf/keras/metrics) in the TensorFlow documentation\. Valid values: string, any of the following: \(`"accuracy"`, `"precision"`, `"recall"`, `"auc"`, or `"prc"`\)\. Default value: `"accuracy"`\. |
| image\_resize\_interpolation | Indicates the interpolation method used when resizing images\. For more information, see [image\.resize](https://www.tensorflow.org/api_docs/python/tf/image/resize) in the TensorFlow documentation\. Valid values: string, any of the following: \(`"bilinear"`, `"nearest"`, `"bicubic"`, `"area"`, `"lanczos3"`, `"lanczos5"`, `"gaussian"`, or `"mitchellcubic"`\)\. Default value: `"bilinear"`\. |
| initial\_accumulator\_value | The starting value for the accumulators, or the per\-parameter momentum values, for the `"adagrad"` optimizer\. Ignored for other optimizers\. Valid values: float, range: \[`0.0`, `1.0`\]\. Default value: `0.0001`\. |
| label\_smoothing | Indicates how much to relax the confidence on label values\. For example, if `label_smoothing` is `0.1`, then non\-target labels are `0.1/num_classes` and target labels are `0.9+0.1/num_classes`\. Valid values: float, range: \[`0.0`, `1.0`\]\. Default value: `0.1`\. |
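As a rough illustration of how these hyperparameters are supplied, the following sketch passes a few of them to a SageMaker training job. The training image URI, execution role, S3 path, instance type, and the specific hyperparameter values are placeholders; retrieve the real ones as described elsewhere in this guide.

```python
from sagemaker.estimator import Estimator

# Placeholders: substitute the training image URI, execution role, and data
# location from your own SageMaker setup for this algorithm.
training_image_uri = "<image-classification-tensorflow-training-image-uri>"
aws_role = "<your-sagemaker-execution-role-arn>"
training_data_s3_path = "s3://<your-bucket>/<training-data-prefix>/"

# Example values only; see the table above for valid values and defaults.
hyperparameters = {
    "epochs": "5",
    "batch_size": "32",
    "optimizer": "adam",
    "learning_rate": "0.001",
    "augmentation": "True",
    "augmentation_random_flip": "horizontal_and_vertical",
    "early_stopping": "True",
    "early_stopping_patience": "5",
    "eval_metric": "accuracy",
}

estimator = Estimator(
    image_uri=training_image_uri,
    role=aws_role,
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    hyperparameters=hyperparameters,
)
estimator.fit({"training": training_data_s3_path})
```

SageMaker passes hyperparameters to the training container as strings, which is why the values above are quoted.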

doc_source/IC-TF-Models.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
# TensorFlow Hub Models<a name="IC-TF-Models"></a>

-The following pretrained models are available to use for transfer learning with the TensorFlow Image Classification algorithm\.
+The following pretrained models are available to use for transfer learning with the Image Classification \- TensorFlow algorithm\.

The following models vary significantly in size, number of model parameters, training time, and inference latency for any given dataset\. The best model for your use case depends on the complexity of your fine\-tuning dataset and any requirements that you have on training time, inference latency, or model accuracy\.

doc_source/IC-TF-tuning.md

Lines changed: 4 additions & 4 deletions
@@ -1,10 +1,10 @@
-# Tune a TensorFlow Image Classification Model<a name="IC-TF-tuning"></a>
+# Tune an Image Classification \- TensorFlow model<a name="IC-TF-tuning"></a>

*Automatic model tuning*, also known as hyperparameter tuning, finds the best version of a model by running many jobs that test a range of hyperparameters on your dataset\. You choose the tunable hyperparameters, a range of values for each, and an objective metric\. You choose the objective metric from the metrics that the algorithm computes\. Automatic model tuning searches the chosen hyperparameters to find the combination of values that results in the model that optimizes the objective metric\.

For more information about model tuning, see [Perform Automatic Model Tuning with SageMaker](automatic-model-tuning.md)\.

-## Metrics computed by the TensorFlow Image Classification algorithm<a name="IC-TF-metrics"></a>
+## Metrics computed by the Image Classification \- TensorFlow algorithm<a name="IC-TF-metrics"></a>

The image classification algorithm is a supervised algorithm\. It reports an accuracy metric that is computed during training\. When tuning the model, choose this metric as the objective metric\.

@@ -13,11 +13,11 @@ The image classification algorithm is a supervised algorithm\. It reports an acc
| --- | --- | --- |
| validation:accuracy | The ratio of the number of correct predictions to the total number of predictions made\. | Maximize |

-## Tunable TensorFlow Image Classification hyperparameters<a name="IC-TF-tunable-hyperparameters"></a>
+## Tunable Image Classification \- TensorFlow hyperparameters<a name="IC-TF-tunable-hyperparameters"></a>

Tune an image classification model with the following hyperparameters\. The hyperparameters that have the greatest impact on image classification objective metrics are: `batch_size`, `learning_rate`, and `optimizer`\. Tune the optimizer\-related hyperparameters, such as `momentum`, `regularizers_l2`, `beta_1`, `beta_2`, and `eps`, based on the selected `optimizer`\. For example, use `beta_1` and `beta_2` only when `adam` is the `optimizer`\.

-For more information about which hyperparameters are used for each `optimizer`, see [TensorFlow Image Classification Hyperparameters](IC-TF-Hyperparameter.md)\.
+For more information about which hyperparameters are used for each `optimizer`, see [Image Classification \- TensorFlow Hyperparameters](IC-TF-Hyperparameter.md)\.


| Parameter Name | Parameter Type | Recommended Ranges |
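The recommended-ranges table is truncated in this view, so the sketch below only assumes plausible ranges for the three highest-impact hyperparameters and wires them into SageMaker automatic model tuning with the `validation:accuracy` objective metric named above. The estimator and data path reuse the placeholders from the earlier sketch; consult the full documentation for the actual recommended ranges.

```python
from sagemaker.tuner import (
    CategoricalParameter,
    ContinuousParameter,
    HyperparameterTuner,
    IntegerParameter,
)

# Illustrative ranges only: the recommended ranges are in the table above
# (truncated in this view); check the full documentation for exact values.
hyperparameter_ranges = {
    "learning_rate": ContinuousParameter(0.0001, 0.1, scaling_type="Logarithmic"),
    "batch_size": IntegerParameter(4, 128),
    "optimizer": CategoricalParameter(["adam", "sgd", "rmsprop"]),
}

tuner = HyperparameterTuner(
    estimator=estimator,                      # the Estimator from the previous sketch
    objective_metric_name="validation:accuracy",
    objective_type="Maximize",
    hyperparameter_ranges=hyperparameter_ranges,
    max_jobs=10,
    max_parallel_jobs=2,
)
tuner.fit({"training": training_data_s3_path})
```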
