Fix inaccurate linear learner hyper-parameter and sagemaker version in doc and readme #120

Merged (7 commits, Apr 3, 2018)
18 changes: 12 additions & 6 deletions CHANGELOG.rst
@@ -2,30 +2,36 @@
 CHANGELOG
 =========
 
+1.2.2
+=====
+
+* bug-fix: Estimators: fix valid range of hyper-parameter 'loss' in linear learner
+
 1.2.1
-========
+=====
 
 * bug-fix: Change Local Mode to use a sagemaker-local docker network
 
 1.2.0
-========
+=====
 
 * feature: Add Support for Local Mode
 * feature: Estimators: add support for TensorFlow 1.6.0
 * feature: Estimators: add support for MXNet 1.1.0
 * feature: Frameworks: Use more idiomatic ECR repository naming scheme
 
 1.1.3
-========
+=====
 
 * bug-fix: TensorFlow: Display updated data correctly for TensorBoard launched from ``run_tensorboard_locally=True``
 * feature: Tests: create configurable ``sagemaker_session`` pytest fixture for all integration tests
-* bug-fix: AmazonEstimators: fix inaccurate hyper-parameters in kmeans, pca and linear learner
-* feature: Add new hyperparameters for linear learner.
+* bug-fix: Estimators: fix inaccurate hyper-parameters in kmeans, pca and linear learner
+* feature: Estimators: Add new hyperparameters for linear learner.
 
 1.1.2
 =====
 
-* bug-fix: AmazonEstimators: do not call create bucket if data location is provided
+* bug-fix: Estimators: do not call create bucket if data location is provided
 
 1.1.1
 =====
2 changes: 1 addition & 1 deletion README.rst
@@ -47,7 +47,7 @@ You can install from source by cloning this repository and issuing a pip install
 
     git clone https://github.com/aws/sagemaker-python-sdk.git
     python setup.py sdist
-    pip install dist/sagemaker-1.1.3.tar.gz
+    pip install dist/sagemaker-1.2.2.tar.gz
 
 Supported Python versions
 ~~~~~~~~~~~~~~~~~~~~~~~~~
2 changes: 1 addition & 1 deletion doc/conf.py
@@ -18,7 +18,7 @@ def __getattr__(cls, name):
     'tensorflow.python.framework', 'tensorflow_serving', 'tensorflow_serving.apis']
 sys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)
 
-version = '1.1.3'
+version = '1.2.2'
 project = u'sagemaker'
 
 # Add any Sphinx extension module names here, as strings. They can be extensions
2 changes: 1 addition & 1 deletion setup.py
@@ -11,7 +11,7 @@ def read(fname):
 
 
 setup(name="sagemaker",
-      version="1.2.1",
+      version="1.2.2",
       description="Open source library for training and deploying models on Amazon SageMaker.",
       packages=find_packages('src'),
       package_dir={'': 'src'},
5 changes: 3 additions & 2 deletions src/sagemaker/amazon/linear_learner.py
@@ -44,13 +44,14 @@ class LinearLearner(AmazonAlgorithmEstimatorBase):
     init_sigma = hp('init_sigma', gt(0), 'A float greater-than 0', float)
     init_bias = hp('init_bias', (), 'A number', float)
     optimizer = hp('optimizer', isin('sgd', 'adam', 'auto'), 'One of "sgd", "adam" or "auto"', str)
-    loss = hp('loss', isin('logistic', 'squared_loss', 'absolute_loss', 'auto'),
+    loss = hp('loss', isin('logistic', 'squared_loss', 'absolute_loss', 'hinge_loss', 'eps_insensitive_squared_loss',
+                           'eps_insensitive_absolute_loss', 'quantile_loss', 'huber_loss', 'auto'),
+              '"logistic", "squared_loss", "absolute_loss", "hinge_loss", "eps_insensitive_squared_loss", '
+              '"eps_insensitive_absolute_loss", "quantile_loss", "huber_loss" or "auto"', str)
     wd = hp('wd', ge(0), 'A float greater-than or equal to 0', float)
     l1 = hp('l1', ge(0), 'A float greater-than or equal to 0', float)
     momentum = hp('momentum', (ge(0), lt(1)), 'A float in [0,1)', float)
-    learning_rate = hp('learning_rate', gt(0), 'A float greater-than or equal to 0', float)
+    learning_rate = hp('learning_rate', gt(0), 'A float greater-than 0', float)
     beta_1 = hp('beta_1', (ge(0), lt(1)), 'A float in [0,1)', float)
     beta_2 = hp('beta_2', (ge(0), lt(1)), 'A float in [0,1)', float)
     bias_lr_mult = hp('bias_lr_mult', gt(0), 'A float greater-than 0', float)
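The fix above widens the set of values the `loss` hyper-parameter accepts before a training job is submitted. The `hp` descriptor and the `isin`, `gt`, `ge`, `lt` validators it uses live elsewhere in the SDK; the sketch below is a simplified, hypothetical reimplementation of that descriptor pattern to show why an out-of-date `isin(...)` list rejects valid values at assignment time. Class and exception names here are illustrative, not the SDK's actual code.

```python
class ValidationError(ValueError):
    """Raised when a hyperparameter value fails validation."""


def isin(*allowed):
    """Return a validator accepting only the listed values."""
    def validate(value):
        return value in allowed
    return validate


class hp:
    """Descriptor that coerces and validates a hyperparameter on assignment."""

    def __init__(self, name, validator, message, data_type):
        self.name = name
        self.validator = validator
        self.message = message
        self.data_type = data_type

    def __set_name__(self, owner, attr):
        # Store the value under a private attribute on the instance.
        self.attr = '_' + attr

    def __set__(self, obj, value):
        value = self.data_type(value)
        if not self.validator(value):
            raise ValidationError(
                "{} must be one of: {}".format(self.name, self.message))
        setattr(obj, self.attr, value)

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return getattr(obj, self.attr, None)


class LinearLearnerSketch:
    # With the corrected valid range, every loss the algorithm supports
    # passes the isin() check instead of being rejected client-side.
    loss = hp('loss',
              isin('logistic', 'squared_loss', 'absolute_loss', 'hinge_loss',
                   'eps_insensitive_squared_loss', 'eps_insensitive_absolute_loss',
                   'quantile_loss', 'huber_loss', 'auto'),
              'one of the supported loss strings', str)


est = LinearLearnerSketch()
est.loss = 'huber_loss'        # accepted once the valid range includes it
print(est.loss)                # huber_loss
try:
    est.loss = 'hinge'         # not in the allowed set
except ValidationError as exc:
    print('rejected:', exc)
```

Before the fix, an `isin` list missing `'huber_loss'` would raise on the first assignment above even though the service itself supports that loss, which is exactly the kind of client-side mismatch this PR corrects.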