
Commit 0a2d10a

Author: EC2 Default User
Commit message: edit ground_truth_labeling_jobs/ground_truth_object_detection_tutorial/object_detection_tutorial.ipynb
Parent: f59ddc6

File tree

1 file changed: +7 lines, -7 lines


ground_truth_labeling_jobs/ground_truth_object_detection_tutorial/object_detection_tutorial.ipynb

Lines changed: 7 additions & 7 deletions
@@ -36,7 +36,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Introduction\n",
+"## Introduction\n",
 "\n",
 "This sample notebook takes you through an end-to-end workflow to demonstrate the functionality of SageMaker Ground Truth. We'll start with an unlabeled image data set, acquire bounding boxes for objects in the images using SageMaker Ground Truth, analyze the results, train an object detector, host the resulting model, and, finally, use it to make predictions. Before you begin, we highly recommend you start a Ground Truth labeling job through the AWS Console first to familiarize yourself with the workflow. The AWS Console offers less flexibility than the API, but is simple to use.\n",
 "\n",
@@ -114,7 +114,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Run a Ground Truth labeling job\n",
+"## Run a Ground Truth labeling job\n",
 "\n",
 "**This section should take about 4 hours to complete.**\n",
 "\n",
@@ -760,7 +760,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Analyze Ground Truth labeling job results\n",
+"## Analyze Ground Truth labeling job results\n",
 "**This section should take about 20 minutes to complete.**\n",
 "\n",
 "Once the job has finished, we can analyze the results. Evaluate the following cell and verify the output is `'Completed'` before continuing."
@@ -1083,7 +1083,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Compare Ground Truth results to standard labels\n",
+"## Compare Ground Truth results to standard labels\n",
 "\n",
 "**This section should take about 5 minutes to complete.**\n",
 "\n",
@@ -1366,7 +1366,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Train an object detection model using Ground Truth labels\n",
+"## Train an object detection model using Ground Truth labels\n",
 "At this stage, we have fully labeled our dataset and we can train a machine learning model to perform object detection. We'll do so using the **augmented manifest** output of our labeling job - no additional file translation or manipulation required! For a more complete description of the augmented manifest, see our other [example notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/ground_truth_labeling_jobs/object_detection_augmented_manifest_training/object_detection_augmented_manifest_training.ipynb).\n",
 "\n",
 "**NOTE:** Object detection is a complex task, and training neural networks to high accuracy requires large datasets and careful hyperparameter tuning. The following cells illustrate how to train a neural network using a Ground Truth output augmented manifest, and how to interpret the results. However, we shouldn't expect a network trained on 100 or 1000 images to do a phenomenal job on unseen images!\n",
@@ -1644,7 +1644,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Deploy the Model \n",
+"## Deploy the Model \n",
 "\n",
 "Now that we've fully labeled our dataset and have a trained model, we want to use the model to perform inference.\n",
 "\n",
@@ -2024,7 +2024,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Review\n",
+"## Review\n",
 "\n",
 "We covered a lot of ground in this notebook! Let's recap what we accomplished. First we started with an unlabeled dataset (technically, the dataset was previously labeled by the authors of the dataset, but we discarded the original labels for the purposes of this demonstration). Next, we created a SageMaker Ground Truth labeling job and generated new labels for all of the images in our dataset. Then we split this file into a training set and a validation set and trained a SageMaker object detection model. Next, we trained a new model using these Ground Truth results and submitted a batch job to label a held-out image from the original dataset. Finally, we created a hosted model endpoint and used it to make a live prediction for the same held-out image."
 ]
