
[lite interpreter][hack] Add batch_norm_update_stats if batchnorm and training are present #100134


Closed
wants to merge 1 commit

Conversation

JacobSzwejbka
Contributor

Summary: It's unclear how the `train` bool passed to `batch_norm` gets set, but it is not the module-level `is_training` flag. This mismatch causes unexpected behavior for teams trying to do on-device training.
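For context, a minimal sketch of the mismatch described above (a hypothetical repro, not code from this PR): the `training` argument of `torch.nn.functional.batch_norm` is a call-site bool, independent of the module-level flag, so the two can disagree.

```python
import torch
import torch.nn.functional as F

# Hypothetical repro sketch (not from this PR): the `training` argument to
# F.batch_norm / aten::batch_norm is a plain call-site bool, independent of
# the module-level `is_training` flag mentioned in the summary.
x = torch.randn(4, 3, 8, 8)
running_mean = torch.zeros(3)
running_var = torch.ones(3)

# With training=True the op updates the running stats in place, regardless
# of what mode the surrounding module thinks it is in.
F.batch_norm(x, running_mean, running_var, training=True, momentum=0.1)
print(running_mean)  # no longer all zeros: the batch mean was folded in
```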

Test Plan: ci

Differential Revision: D45335791

@pytorch-bot added the release notes: mobile label on Apr 26, 2023

@pytorch-bot (bot) commented on Apr 26, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/100134

Note: Links to docs will display an error until the docs builds have been completed.

❗ 2 Active SEVs

There are 2 currently active SEVs. If your PR is affected, please view them below:

✅ No Failures

As of commit 9e5e611:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D45335791


[lite interpreter][hack] Add batch_norm_update_stats if batchnorm and training are present (pytorch#100134)

Summary:
Pull Request resolved: pytorch#100134

It's unclear how the `train` bool passed to `batch_norm` gets set, but it is not the module-level `is_training` flag. This mismatch causes unexpected behavior for teams trying to do on-device training.

Test Plan: ci

Reviewed By: Jack-Khuu

Differential Revision: D45335791

fbshipit-source-id: 726272178cf4b3ecb17a57e7937d9211ad465b50
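For reference, `aten::batch_norm_update_stats` is the op this change bundles for the lite interpreter when both batchnorm and training are present. A minimal sketch of invoking it directly (assuming the `torch.batch_norm_update_stats` binding; an illustration only, not code from this PR):

```python
import torch

# Sketch only: aten::batch_norm_update_stats folds the current batch's
# statistics into running_mean / running_var in place, which is why it must
# be available on device when a model trains a batchnorm layer.
x = torch.randn(4, 3, 8, 8)
running_mean = torch.zeros(3)
running_var = torch.ones(3)

# momentum = 0.1; the op returns the per-channel batch statistics and
# mutates the running buffers in place.
save_mean, save_var = torch.batch_norm_update_stats(x, running_mean, running_var, 0.1)
print(running_mean)  # updated in place toward the batch mean
```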

@JacobSzwejbka
Contributor Author

@pytorchbot merge

@pytorch-bot added the ciflow/trunk label on May 1, 2023
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here

valentinandrei pushed a commit to valentinandrei/pytorch that referenced this pull request May 2, 2023
[lite interpreter][hack] Add batch_norm_update_stats if batchnorm and training are present (pytorch#100134)

Summary: It's unclear how the `train` bool passed to `batch_norm` gets set, but it is not the module-level `is_training` flag. This mismatch causes unexpected behavior for teams trying to do on-device training.

Test Plan: ci

Differential Revision: D45335791

Pull Request resolved: pytorch#100134
Approved by: https://github.com/larryliu0820
Labels
ciflow/trunk (Trigger trunk jobs on your pull request), fb-exported, Merged, merging, release notes: mobile (release notes category)