change: move script mode branch to master #234
Merged
Conversation
* Script mode with CPU Docker py2 and py3 Dockerfiles
* Migrate to sagemaker-containers 2.1
* Remove serving-related packages and code from the container
* Add py3 container
* Add integ and unit tests for script mode
* Remove non-ASCII characters from README
* Changes based on PR comments
* Move conftest to test root dir
* Add default values for test args
* Add docker-compose to test requirements
* Add tox.ini and configure coverage and flake8 runs
* Add more unit tests
* Configure unit tests to run with both py2 and py3
* Add flake8 checks
* Fix broken integ tests
* Add import style check
* Add .flake8
* Add source module in coverage command
* Add newlines
* Add mnist SageMaker tests
* Use account-id instead of ecr-image
* Merge GPU and CPU SageMaker tests
* Remove _run_mnist_training
* Add Script Mode example
* Add benchmarking script
* Edit TF script mode notebook
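For readers new to Script Mode: a training entry point is just a regular TensorFlow script that reads SageMaker's SM_* environment variables and receives hyperparameters as command-line arguments. Below is a minimal sketch of such an entry point; it is not the example added in this PR, and the hyperparameter names and default paths are assumptions for illustration.

```python
# Minimal illustrative Script Mode entry point (not the example added in this PR).
# SageMaker passes hyperparameters as CLI arguments and exposes input/output
# locations through SM_* environment variables.
import argparse
import os

import tensorflow as tf


def parse_args():
    parser = argparse.ArgumentParser()
    # Hyperparameter names below are assumptions for illustration.
    parser.add_argument('--epochs', type=int, default=1)
    parser.add_argument('--batch-size', type=int, default=64)
    # Set by SageMaker; the default lets the script also run outside a container.
    parser.add_argument('--model-dir', type=str,
                        default=os.environ.get('SM_MODEL_DIR', '/tmp/model'))
    return parser.parse_args()


def main():
    args = parse_args()
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train.reshape(-1, 784).astype('float32') / 255.0

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    model.fit(x_train, y_train, epochs=args.epochs, batch_size=args.batch_size)

    # Save under model_dir so SageMaker uploads the artifact to S3 after training.
    if not os.path.isdir(args.model_dir):
        os.makedirs(args.model_dir)
    model.save(os.path.join(args.model_dir, 'model.h5'))


if __name__ == '__main__':
    main()
```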
* Implement distributed support
* Launch parameter server if the user sets sagemaker_parameter_server_enabled to True
* Add integ tests
* Add unit tests
* Add distributed SageMaker integ test
* Add 1.11.0 and modify Dockerfile to reduce image size
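As a rough sketch of the idea, the launcher checks the hyperparameter and starts a background parameter server task. The helper names, ports, entry-point name, and TF_CONFIG layout below are assumptions for illustration, not the container's actual implementation.

```python
# Illustrative sketch only; helper names, ports, and the overall flow are
# assumptions, not the container's actual code.
import json
import os
import subprocess


def _tf_config(hosts, current_host, task_type):
    """Build a TF_CONFIG-style cluster spec with one worker and one ps per host."""
    return {
        'cluster': {
            'worker': ['{}:2222'.format(h) for h in hosts],
            'ps': ['{}:2223'.format(h) for h in hosts],
        },
        'task': {'type': task_type, 'index': hosts.index(current_host)},
        'environment': 'cloud',
    }


def maybe_start_parameter_server(hyperparameters, hosts, current_host):
    """Launch a background 'ps' task only when the user enabled it."""
    if not hyperparameters.get('sagemaker_parameter_server_enabled', False):
        return None
    env = os.environ.copy()
    env['TF_CONFIG'] = json.dumps(_tf_config(hosts, current_host, task_type='ps'))
    # 'train.py' is a placeholder for the user's entry point.
    return subprocess.Popen(['python', 'train.py'], env=env)
```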
* Add CI configuration files
* Set S3 environment variables before training starts
* Remove S3 environment variable setting in test training script
* Add unit tests
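A minimal sketch of what setting S3 environment variables before training could look like: TensorFlow's S3 filesystem reads settings such as S3_REGION and S3_USE_HTTPS from the environment, but the exact variables the container sets and where the region comes from are assumptions here.

```python
# Illustrative sketch: export S3 settings for TensorFlow's S3 filesystem before
# handing control to the user's training script. The region source is an assumption.
import os


def _set_s3_env_vars(region):
    os.environ['S3_REGION'] = region   # region of the bucket used for model_dir/checkpoints
    os.environ['S3_USE_HTTPS'] = '1'   # talk to S3 over TLS
    os.environ['S3_VERIFY_SSL'] = '1'  # verify certificates


# Example: call before training starts (region value is a placeholder).
# _set_s3_env_vars('us-west-2')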
* Update sagemaker containers
* Add wait False to run-ps
* Unset CUDA_VISIBLE_DEVICES for worker processes
* Add comments
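The CUDA_VISIBLE_DEVICES idea in a nutshell, as a simplified sketch: a launched process with the variable cleared sees no GPUs and therefore does not reserve GPU memory. The launcher function and the 'train.py' placeholder below are assumptions.

```python
# Simplified illustration: launch an auxiliary process with CUDA_VISIBLE_DEVICES
# cleared so it does not reserve any GPU memory needed by the training process.
import os
import subprocess


def launch_without_gpus(cmd):
    env = os.environ.copy()
    env['CUDA_VISIBLE_DEVICES'] = ''  # empty string hides all GPUs from this process
    return subprocess.Popen(cmd, env=env)


# launch_without_gpus(['python', 'train.py'])  # 'train.py' is a placeholder
```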
The tests all passed; not sure why the SageMaker tests are not reporting success.
* Add Keras support
* Create parameter server in a different thread
* Fix some integ tests
This test is only configured to run with 'local'. Change it to use the correct instance type accordingly.
…the test (aws#134)
* Skip Keras local mode test on GPU
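A guard like the one described might look like the following pytest sketch; the --instance-type option and fixture name are assumptions about how this suite passes instance types.

```python
# Illustrative only: skip a local-mode-only test when a real instance type is requested.
# The --instance-type option name is an assumption about this test suite.
import pytest


@pytest.fixture
def instance_type(request):
    return request.config.getoption('--instance-type', default='local')


def test_keras_training(instance_type):
    if not instance_type.startswith('local'):
        pytest.skip('this test only runs in local mode')
    # ... run the Keras local mode training here ...
```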
This is for compatibility with a recent SageMaker Containers change: https://github.com/aws/sagemaker-containers/pull/157/files#diff-25848f52cb812bb370f854bffb2e7b40
Need this change for the container release. The change only disables tests.
* Add S3 plugin tests

TensorFlow's S3 plugin doesn't work well with S3's eventual consistency model, so we have seen training jobs failing due to checkpoint or model exports to S3. We recently released our prod containers with an S3 plugin patch, which should reduce or eliminate such errors. The test added here writes a checkpoint to S3 after every training step; it fails with vanilla TensorFlow.

* Remove distributed_mnist.py
* Fix line too long
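A rough sketch of such a test script, assuming a TF 1.x Estimator with an S3 model_dir; the model, data, and bucket name are placeholders, not the actual test added in this PR.

```python
# Illustrative sketch: train a toy estimator that checkpoints to S3 after every
# step, exercising the S3 filesystem plugin. The bucket name is a placeholder.
import numpy as np
import tensorflow as tf


def model_fn(features, labels, mode):
    logits = tf.layers.dense(features['x'], 10)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    train_op = tf.train.GradientDescentOptimizer(0.1).minimize(
        loss, global_step=tf.train.get_or_create_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)


def main():
    run_config = tf.estimator.RunConfig(save_checkpoints_steps=1)  # checkpoint every step
    estimator = tf.estimator.Estimator(
        model_fn=model_fn,
        model_dir='s3://my-bucket/checkpoints',  # placeholder bucket
        config=run_config)

    x = np.random.rand(64, 784).astype(np.float32)
    y = np.random.randint(0, 10, size=(64,)).astype(np.int32)
    train_input_fn = tf.estimator.inputs.numpy_input_fn(
        {'x': x}, y, batch_size=8, num_epochs=None, shuffle=True)
    estimator.train(train_input_fn, steps=200)


if __name__ == '__main__':
    main()
```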
This test shouldn't save checkpoints, since the two hosts are just running training jobs independently and the checkpoints interfere with each other. The test now uses the Keras mnist script instead. This change also moves the saved model path to /opt/ml/opt so we can simply use the estimator.model_data path to assert that the model exists.
* Use the test argument framework_version in all tests
* Make flake8 happy
* This change fixes module import errors in the test directory when running with Python 2.7
* Reduce max training steps in the mnist test from 1000 to 200 in order to shorten test runtime
Add placeholder in test commands for cpu-instance-type and aws-id.
AWS CodeBuild CI Report
Powered by github-codebuild-logs, available on the AWS Serverless Application Repository
chuyang-deng approved these changes on Sep 19, 2019
# If the training job is part of multiple training jobs for tuning, we need to append the
# training job name to model_dir in case they read from/write to the same object.
if '_tuning_objective_metric' in hyperparameters:
    model_dir = _model_dir_with_training_job(hyperparameters.get('model_dir'), env.job_name)
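The helper itself isn't shown in this excerpt; a plausible sketch of appending the job name to an S3 model_dir might look like the following (illustrative only, not necessarily the actual implementation).

```python
# Illustrative sketch of the helper referenced above; the real implementation
# may differ, e.g. in how local or container paths are handled.
def _model_dir_with_training_job(model_dir, job_name):
    if model_dir and not model_dir.startswith('s3://'):
        return model_dir  # local/container paths are left untouched
    return '{}/{}'.format(model_dir, job_name) if model_dir else None
```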
It seems that hyperparameters does not have 'model_dir' while running my tuning job. Should hyperparameters here be user_hyperparameters instead? @mvsusp
Description of changes:
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.