
Commit c4166be

Merge pull request #138 from awslabs/arpin_sagemarker_fixes
Fixed: Typos of sagemarker to sagemaker
2 parents bd4beee + de3590c

File tree

4 files changed, +4 -4 lines changed


advanced_functionality/r_bring_your_own/r_bring_your_own.ipynb

Lines changed: 1 addition & 1 deletion
@@ -97,7 +97,7 @@
     "\n",
     "### Fit\n",
     "\n",
-    "`mars.R` creates functions to fit and serve our model. The algorithm we've chosen to use is [Multivariate Adaptive Regression Splines](https://en.wikipedia.org/wiki/Multivariate_adaptive_regression_splines). This is a suitable example as it's a unique and powerful algorithm, but isn't as broadly used as Amazon SageMarker algorithms, and it isn't available in Python's scikit-learn library. R's repository of packages is filled with algorithms that share these same criteria. "
+    "`mars.R` creates functions to fit and serve our model. The algorithm we've chosen to use is [Multivariate Adaptive Regression Splines](https://en.wikipedia.org/wiki/Multivariate_adaptive_regression_splines). This is a suitable example as it's a unique and powerful algorithm, but isn't as broadly used as Amazon SageMaker algorithms, and it isn't available in Python's scikit-learn library. R's repository of packages is filled with algorithms that share these same criteria. "
     ]
   },
   {
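
The paragraph being corrected describes the fit-and-serve pattern in `mars.R`. As a point of comparison, here is a minimal sketch of fitting MARS from Python using the community py-earth package (a scikit-learn-contrib project, not scikit-learn itself; the package choice and toy data are assumptions, since the notebook fits MARS in R via `mars.R`):

import numpy as np
from pyearth import Earth  # community MARS implementation (assumed installed)

X = np.random.uniform(size=(100, 4))                     # toy features
y = np.sin(X[:, 0]) + 0.1 * np.random.normal(size=100)   # toy target
model = Earth()  # Multivariate Adaptive Regression Splines
model.fit(X, y)
predictions = model.predict(X)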

advanced_functionality/working_with_redshift_data/working_with_redshift_data.ipynb

Lines changed: 1 addition & 1 deletion
@@ -59,7 +59,7 @@
     "region = boto3.Session().region_name\n",
     "\n",
     "bucket='<your_s3_bucket_name_here>' # put your s3 bucket name here, and create s3 bucket\n",
-    "prefix = 'sagemarker/redshift'\n",
+    "prefix = 'sagemaker/redshift'\n",
     "# customize to your bucket where you have stored the data\n",
     "\n",
     "credfile = 'redshift_creds_template.json.nogit'"

introduction_to_amazon_algorithms/xgboost_mnist/xgboost_mnist.ipynb

Lines changed: 1 addition & 1 deletion
@@ -69,7 +69,7 @@
     "region = boto3.Session().region_name\n",
     "\n",
     "bucket='<bucket-name>' # put your s3 bucket name here, and create s3 bucket\n",
-    "prefix = 'sagemarker/xgboost-multiclass-classification'\n",
+    "prefix = 'sagemaker/xgboost-multiclass-classification'\n",
     "# customize to your bucket where you have stored the data\n",
     "bucket_path = 'https://s3-{}.amazonaws.com/{}'.format(region,bucket)"
     ]
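
Here the corrected `prefix` combines with `bucket_path` to form the S3 URIs the training job reads from and writes to. A sketch under the assumption of the usual channel layout (the 'train', 'validation', and 'output' names are assumptions):

region = 'us-west-2'  # assumption, for illustration only
bucket = '<bucket-name>'
prefix = 'sagemaker/xgboost-multiclass-classification'
bucket_path = 'https://s3-{}.amazonaws.com/{}'.format(region, bucket)

# Input channels and output location are rooted at bucket_path/prefix
train_data = '{}/{}/train/'.format(bucket_path, prefix)
validation_data = '{}/{}/validation/'.format(bucket_path, prefix)
output_location = '{}/{}/output'.format(bucket_path, prefix)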

introduction_to_applying_machine_learning/xgboost_customer_churn/xgboost_customer_churn.ipynb

Lines changed: 1 addition & 1 deletion
@@ -502,7 +502,7 @@
     "\n",
     "An important point here is that because of the `np.round()` function above we are using a simple threshold (or cutoff) of 0.5. Our predictions from `xgboost` come out as continuous values between 0 and 1 and we force them into the binary classes that we began with. However, because a customer that churns is expected to cost the company more than proactively trying to retain a customer who we think might churn, we should consider adjusting this cutoff. That will almost certainly increase the number of false positives, but it can also be expected to increase the number of true positives and reduce the number of false negatives.\n",
     "\n",
-    "To get a rought intuition here, let's look at the continuous values of our predictions."
+    "To get a rough intuition here, let's look at the continuous values of our predictions."
     ]
   },
   {
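
The adjustable-cutoff point in the corrected paragraph is easy to demonstrate. A sketch, assuming `predictions` holds the continuous xgboost scores and `test_labels` the true 0/1 churn labels (both names and the toy values are assumptions standing in for the notebook's variables):

import numpy as np
import pandas as pd

predictions = np.array([0.12, 0.41, 0.35, 0.86, 0.48])  # toy continuous scores
test_labels = np.array([0, 1, 0, 1, 1])                  # toy true labels

# A cutoff below 0.5 flags more borderline customers as churners:
# more false positives, but fewer (costlier) false negatives.
cutoff = 0.3
binary_predictions = np.where(predictions > cutoff, 1, 0)
print(pd.crosstab(index=test_labels, columns=binary_predictions,
                  rownames=['actual'], colnames=['predicted']))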
