Commit bebef85

Merge branch 'master' into master
2 parents: dfa5aaa + f4dbcd3

File tree

1,807 files changed (+431679 / −15295 lines)


.github/CODEOWNERS

Lines changed: 14 additions & 0 deletions

@@ -0,0 +1,14 @@
+# Each line is a file pattern followed by one or more owners (@username or
+# email@address).
+#
+# Order is important. The last matching pattern has the most precedence.
+# So if a pull request only touches a specific folder, only the respective owners
+# will be requested to review.
+#
+# @See https://help.github.com/articles/about-codeowners/
+
+/sagemaker-experiments/* @aws/sagemakerexperimentsadmin
+/sagemaker-lineage/* @aws/sagemakerexperimentsadmin
+
+# Community contributed
+/contrib/ @aws/sagemaker-notebook-sas
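The last-match precedence rule described in the comments above can be illustrated with a minimal sketch; the paths and team names below are hypothetical, not taken from this commit:

```
# Hypothetical CODEOWNERS fragment: the LAST matching pattern wins.
/docs/          @org/docs-team
/docs/api/*.md  @org/api-team
# A pull request touching only docs/api/foo.md matches both patterns,
# so review is requested from @org/api-team, not @org/docs-team.
```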

.gitignore

Lines changed: 3 additions & 0 deletions

@@ -1,4 +1,7 @@
 **/.ipynb_checkpoints
 **/.idea
 **/__pycache__
+**/.aws-sam
 .DS_Store
+
+**/_build

.readthedocs.yml

Lines changed: 15 additions & 0 deletions

@@ -0,0 +1,15 @@
+# ReadTheDocs environment customization to allow us to use conda to install
+# libraries which have C dependencies for the doc build. See:
+# https://docs.readthedocs.io/en/latest/config-file/v2.html
+
+version: 2
+
+conda:
+  environment: environment.yml
+
+python:
+  version: 3.6
+
+sphinx:
+  configuration: conf.py
+  fail_on_warning: false
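The environment.yml referenced by the conda section is not part of this diff. As a rough sketch only, such a conda environment file typically looks like the following; the name and package list here are illustrative guesses, not the commit's actual contents:

```
# Hypothetical environment.yml for the doc build (illustrative only).
name: docs
channels:
  - conda-forge
dependencies:
  - python=3.6
  - sphinx
```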

Makefile

Lines changed: 20 additions & 0 deletions

@@ -0,0 +1,20 @@
+# Minimal makefile for Sphinx documentation
+#
+
+# You can set these variables from the command line, and also
+# from the environment for the first two.
+SPHINXOPTS  ?=
+SPHINXBUILD ?= sphinx-build
+SOURCEDIR   = .
+BUILDDIR    = _build
+
+# Put it first so that "make" without argument is like "make help".
+help:
+	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
+
+.PHONY: help Makefile
+
+# Catch-all target: route all unknown targets to Sphinx using the new
+# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
+%: Makefile
+	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
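The catch-all rule above means any target name (`make html`, `make latexpdf`, ...) is forwarded to sphinx-build in "make mode". A minimal shell demonstration of the same `%: Makefile` pattern-rule mechanism, with `echo` standing in for sphinx-build so it runs without Sphinx installed (the temp directory and echo recipe are illustrative, assuming GNU make is available):

```shell
# Build a throwaway Makefile containing only a catch-all pattern rule.
demo_dir=$(mktemp -d)
printf '%%: Makefile\n\t@echo "routing target $@ to sphinx-build"\n' > "$demo_dir/Makefile"

# Any target name now matches the %-rule; $@ expands to that target.
out=$(make --no-print-directory -C "$demo_dir" html)
echo "$out"
```

The `Makefile` prerequisite on the pattern rule is what keeps make from applying the rule recursively to the makefile itself.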

README.md

Lines changed: 148 additions & 14 deletions
Large diffs are not rendered by default.

_static/js/analytics.js

Lines changed: 2 additions & 0 deletions
@@ -0,0 +1,2 @@
+console.log("Starting analytics...");
+var s_code=s.t();if(s_code)document.write(s_code)

_static/sagemaker_gears.jpg

26.3 KB

advanced_functionality/README.md

Lines changed: 3 additions & 1 deletion
@@ -14,4 +14,6 @@ These examples that showcase unique functionality available in Amazon SageMaker.
 - [Bring Your Own R Algorithm](r_bring_your_own) shows how to bring your own algorithm container to Amazon SageMaker using the R language.
 - [Bring Your Own scikit Algorithm](scikit_bring_your_own) provides a detailed walkthrough on how to package a scikit learn algorithm for training and production-ready hosting.
 - [Bring Your Own MXNet Model](mxnet_mnist_byom) shows how to bring a model trained anywhere using MXNet into Amazon SageMaker
-- [Bring Your Own TensorFlow Model](tensorflow_iris_byom) shows how to bring a model trained anywhere using TensorFlow into Amazon SageMaker
+- [Bring Your Own TensorFlow Model](tensorflow_iris_byom) shows how to bring a model trained anywhere using TensorFlow into Amazon SageMaker
+- [Inference Pipeline with SparkML and XGBoost](inference_pipeline_sparkml_xgboost_abalone) shows how to deploy an Inference Pipeline with SparkML for data pre-processing and XGBoost for training on the Abalone dataset. The pre-processing code is written once and used between training and inference.
+- [Inference Pipeline with SparkML and BlazingText](inference_pipeline_sparkml_blazingtext_dbpedia) shows how to deploy an Inference Pipeline with SparkML for data pre-processing and BlazingText for training on the DBPedia dataset. The pre-processing code is written once and used between training and inference.

0 commit comments
