change: include additional docstyle improvements #1889

Merged
merged 6 commits on Oct 7, 2020
2 changes: 1 addition & 1 deletion .pydocstylerc
@@ -1,4 +1,4 @@
[pydocstyle]
inherit = false
-ignore = D104,D107,D202,D203,D205,D209,D212,D213,D214,D400,D401,D404,D406,D407,D411,D413,D414,D415,D417
+ignore = D104,D107,D202,D203,D205,D212,D213,D214,D400,D401,D404,D406,D407,D411,D413,D414,D415,D417
match = (?!record_pb2).*\.py
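
Note: the change above stops ignoring D209 ("Multi-line docstring closing quotes should be on a separate line"), which is why the docstring edits in the files below move each closing """ onto its own line. A minimal, hypothetical illustration of the rule (not code from this PR):

class BadExample:
    """Summary line.

    Extended description that spans
    more than one line."""  # flagged by D209: closing quotes share the last text line


class GoodExample:
    """Summary line.

    Extended description that spans
    more than one line.
    """  # passes D209: closing quotes on their own line

Running pydocstyle with this configuration (for example, pydocstyle src/sagemaker) would now flag BadExample while GoodExample passes.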
3 changes: 2 additions & 1 deletion src/sagemaker/amazon/factorization_machines.py
@@ -30,7 +30,8 @@ class FactorizationMachines(AmazonAlgorithmEstimatorBase):
Factorization Machines combine the advantages of Support Vector Machines
with factorization models. It is an extension of a linear model that is
designed to capture interactions between features within high dimensional
sparse datasets economically."""
sparse datasets economically.
"""

repo_name = "factorization-machines"
repo_version = 1
3 changes: 2 additions & 1 deletion src/sagemaker/amazon/ipinsights.py
@@ -29,7 +29,8 @@ class IPInsights(AmazonAlgorithmEstimatorBase):
"""An unsupervised learning algorithm that learns the usage patterns for IPv4 addresses.

It is designed to capture associations between IPv4 addresses and various entities, such
as user IDs or account numbers."""
as user IDs or account numbers.
"""

repo_name = "ipinsights"
repo_version = 1
3 changes: 2 additions & 1 deletion src/sagemaker/amazon/kmeans.py
@@ -29,7 +29,8 @@ class KMeans(AmazonAlgorithmEstimatorBase):

As the result of KMeans, members of a group are as similar as possible to one another and as
different as possible from members of other groups. You define the attributes that you want
the algorithm to use to determine similarity."""
the algorithm to use to determine similarity.
"""

repo_name = "kmeans"
repo_version = 1
3 changes: 2 additions & 1 deletion src/sagemaker/amazon/knn.py
@@ -30,7 +30,8 @@ class KNN(AmazonAlgorithmEstimatorBase):
For classification problems, the algorithm queries the k points that are closest to the sample
point and returns the most frequently used label of their class as the predicted label. For
regression problems, the algorithm queries the k closest points to the sample point and returns
the average of their feature values as the predicted value."""
the average of their feature values as the predicted value.
"""

repo_name = "knn"
repo_version = 1
3 changes: 2 additions & 1 deletion src/sagemaker/amazon/lda.py
@@ -30,7 +30,8 @@ class LDA(AmazonAlgorithmEstimatorBase):
LDA is most commonly used to discover a
user-specified number of topics shared by documents within a text corpus. Here each
observation is a document, the features are the presence (or occurrence count) of each
word, and the categories are the topics."""
word, and the categories are the topics.
"""

repo_name = "lda"
repo_version = 1
3 changes: 2 additions & 1 deletion src/sagemaker/amazon/linear_learner.py
@@ -32,7 +32,8 @@ class LinearLearner(AmazonAlgorithmEstimatorBase):
For multiclass classification problems, the labels must be from 0 to num_classes - 1. For
regression problems, y is a real number. The algorithm learns a linear function, or, for
classification problems, a linear threshold function, and maps a vector x to an approximation
of the label y."""
of the label y.
"""

repo_name = "linear-learner"
repo_version = 1
5 changes: 3 additions & 2 deletions src/sagemaker/amazon/ntm.py
@@ -25,11 +25,12 @@


class NTM(AmazonAlgorithmEstimatorBase):
"""An unsupervised learning algorithm used to organize a corpus of documents into topics
"""An unsupervised learning algorithm used to organize a corpus of documents into topics.

The resulting topics contain word groupings based on their statistical distribution.
Documents that contain frequent occurrences of words such as "bike", "car", "train",
"mileage", and "speed" are likely to share a topic on "transportation" for example."""
"mileage", and "speed" are likely to share a topic on "transportation" for example.
"""

repo_name = "ntm"
repo_version = 1
6 changes: 4 additions & 2 deletions src/sagemaker/amazon/object2vec.py
@@ -24,7 +24,8 @@


def _list_check_subset(valid_super_list):
"""
"""Provides a function to check validity of list subset.

Args:
valid_super_list:
"""
@@ -45,7 +46,8 @@ class Object2Vec(AmazonAlgorithmEstimatorBase):

It can learn low-dimensional dense embeddings of high-dimensional objects. The embeddings
are learned in a way that preserves the semantics of the relationship between pairs of
objects in the original space in the embedding space."""
objects in the original space in the embedding space.
"""

repo_name = "object2vec"
repo_version = 1
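
The _list_check_subset docstring above describes a validator factory, but its body falls outside the hunk shown here. A plausible sketch of such a factory, assumed for illustration only and not taken from the SDK source:

def _list_check_subset(valid_super_list):
    """Return a validator checking a comma-separated string against valid_super_list (sketch)."""

    def validate(value):
        # Expect a comma-separated string, e.g. "label, score".
        if not isinstance(value, str):
            return False
        items = [item.strip() for item in value.split(",")]
        return set(items).issubset(set(valid_super_list))

    return validate

For example, _list_check_subset(["label", "score"])("label, score") would return True, while a string containing an unknown item would return False.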
3 changes: 2 additions & 1 deletion src/sagemaker/amazon/pca.py
@@ -28,7 +28,8 @@ class PCA(AmazonAlgorithmEstimatorBase):
"""An unsupervised machine learning algorithm to reduce feature dimensionality.

As a result, number of features within a dataset is reduced but the dataset still
retain as much information as possible."""
retain as much information as possible.
"""

repo_name = "pca"
repo_version = 1
3 changes: 2 additions & 1 deletion src/sagemaker/amazon/randomcutforest.py
@@ -29,7 +29,8 @@ class RandomCutForest(AmazonAlgorithmEstimatorBase):

These are observations which diverge from otherwise well-structured or patterned data.
Anomalies can manifest as unexpected spikes in time series data, breaks in periodicity,
or unclassifiable data points."""
or unclassifiable data points.
"""

repo_name = "randomcutforest"
repo_version = 1
37 changes: 21 additions & 16 deletions src/sagemaker/automl/automl.py
@@ -26,7 +26,7 @@


class AutoML(object):
"""A class for creating and interacting with SageMaker AutoML jobs"""
"""A class for creating and interacting with SageMaker AutoML jobs."""

def __init__(
self,
@@ -178,14 +178,14 @@ def describe_auto_ml_job(self, job_name=None):
return self._auto_ml_job_desc

def best_candidate(self, job_name=None):
"""Returns the best candidate of an AutoML job for a given name
"""Returns the best candidate of an AutoML job for a given name.

Args:
job_name (str): The name of the AutoML job. If None, will use object's
_current_auto_ml_job_name.

Returns:
dict: a dictionary with information of the best candidate
dict: A dictionary with information of the best candidate.
"""
if self._best_candidate:
return self._best_candidate
@@ -229,7 +229,7 @@ def list_candidates(
between 1 to 100. Default to None. If None, will return all the candidates.

Returns:
list: A list of dictionaries with candidates information
list: A list of dictionaries with candidates information.
"""
if job_name is None:
job_name = self.current_job_name
@@ -262,8 +262,7 @@ def create_model(
predictor_cls=None,
inference_response_keys=None,
):
"""Creates a model from a given candidate or the best candidate
from the automl job
"""Creates a model from a given candidate or the best candidate from the job.

Args:
name (str): The pipeline model name.
@@ -289,7 +288,7 @@
keys will dictate the content order in the response.

Returns:
PipelineModel object
PipelineModel object.
"""
sagemaker_session = sagemaker_session or self.sagemaker_session

@@ -351,7 +350,7 @@ def deploy(
predictor_cls=None,
inference_response_keys=None,
):
"""Deploy a candidate to a SageMaker Inference Pipeline and return a Predictor
"""Deploy a candidate to a SageMaker Inference Pipeline.

Args:
initial_instance_count (int): The initial number of instances to run
@@ -490,12 +489,14 @@ def _get_supported_inference_keys(cls, container, default=None):

@classmethod
def _check_inference_keys(cls, inference_response_keys, containers):
"""Given an inference container list, checks if the pipeline supports the
requested inference keys
"""Checks if the pipeline supports the inference keys for the containers.

Given inference response keys and list of containers, determines whether
the keys are supported.

Args:
inference_response_keys (list): List of keys for inference response content
containers (list): list of inference container
inference_response_keys (list): List of keys for inference response content.
containers (list): list of inference container.

Raises:
ValueError, if one or more keys in inference_response_keys are not supported
@@ -527,8 +528,10 @@ def _check_inference_keys(cls, inference_response_keys, containers):

@classmethod
def validate_and_update_inference_response(cls, inference_containers, inference_response_keys):
"""Validates the requested inference keys and updates inference containers to emit the
requested content in the inference response.
"""Validates the requested inference keys and updates response content.

On validation, also updates the inference containers to emit appropriate response
content in the inference response.

Args:
inference_containers (list): list of inference containers
@@ -570,8 +573,10 @@ def validate_and_update_inference_response(cls, inference_containers, inference_


class AutoMLInput(object):
"""Accepts parameters that specify an S3 input for an auto ml job and provides
a method to turn those parameters into a dictionary."""
"""Accepts parameters that specify an S3 input for an auto ml job

Provides a method to turn those parameters into a dictionary.
"""

def __init__(self, inputs, target_attribute_name, compression=None):
"""Convert an S3 Uri or a list of S3 Uri to an AutoMLInput object.
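
Taken together, the methods whose docstrings change above cover the main AutoML workflow. A hedged usage sketch follows: AutoML, AutoMLInput, best_candidate, list_candidates, and deploy appear in this diff, while the constructor arguments, fit(), and the response keys are assumptions about the SDK and use placeholder values.

from sagemaker.automl.automl import AutoML, AutoMLInput

# Placeholder role ARN, bucket, and target column.
automl = AutoML(
    role="arn:aws:iam::111122223333:role/SageMakerRole",
    target_attribute_name="label",
    max_candidates=5,
)

# Wrap the S3 training data for the job.
train_input = AutoMLInput(
    inputs="s3://my-bucket/automl/train/",
    target_attribute_name="label",
)

automl.fit(train_input)                                # start the AutoML job
best = automl.best_candidate()                         # dict for the best candidate
candidates = automl.list_candidates(max_results=10)    # list of candidate dicts

# Deploy the best candidate as an inference pipeline; the response keys here
# are example values, not necessarily what a given candidate supports.
predictor = automl.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
    inference_response_keys=["predicted_label", "probability"],
)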
11 changes: 7 additions & 4 deletions src/sagemaker/debugger.py
@@ -320,11 +320,15 @@ def _to_request_dict(self):


class TensorBoardOutputConfig(object):
"""TensorBoardOutputConfig provides options to customize
debugging visualization using TensorBoard."""
"""A configuration object to provide debugging customizations.

A configuration specific to TensorBoard output, that provides options
to customize debugging visualizations using TensorBoard.
"""

def __init__(self, s3_output_path, container_local_output_path=None):
"""Initialize an instance of TensorBoardOutputConfig.

TensorBoardOutputConfig provides options to customize
debugging visualization using TensorBoard.

@@ -336,8 +340,7 @@ def __init__(self, s3_output_path, container_local_output_path=None):
self.container_local_output_path = container_local_output_path

def _to_request_dict(self):
"""Generates a request dictionary using the parameters provided
when initializing the object.
"""Generates a request dictionary using the instances attributes.

Returns:
dict: An portion of an API request as a dictionary.
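
For reference, the TensorBoardOutputConfig constructor documented above takes an S3 output path and an optional path inside the training container. A brief sketch; the values are placeholders, and wiring the object into an estimator through a tensorboard_output_config keyword is an assumption about the estimator API rather than something shown in this diff.

from sagemaker.debugger import TensorBoardOutputConfig

tensorboard_config = TensorBoardOutputConfig(
    s3_output_path="s3://my-bucket/tensorboard-logs/",
    container_local_output_path="/opt/ml/output/tensorboard",  # optional
)

# Hypothetical wiring into a framework estimator (keyword name assumed):
# estimator = TensorFlow(..., tensorboard_output_config=tensorboard_config)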