Starter Task 1: Get learning rate for llm_pte_finetuning example from config file #11445
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/11445

Note: Links to docs will display an error until the docs builds have been completed. ⏳ No failures, 1 pending as of commit a5b9633 with merge base 2bb567f. This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D75807517
… config file (pytorch#11445)

Summary: Pull Request resolved: pytorch#11445

1. **Cloned Repository**: Cloned the Qwen2-0.5B-Instruct repository from HuggingFace onto the local desktop.
2. **Uploaded Files to OD Server**: Uploaded the cloned files to the OD server (57651.od.fbinfra.net) at `/tmp/Qwen2-0.5B-Instruct`.
3. **Created a New Directory for the pte Model**: Created a new directory `model` under `fbcode/executorch/examples/llm_pte_finetuning`.
4. **Configured Learning Rate**: Updated `qwen_05b_config.yaml` to set the learning rate to `5e-3` under the model config parameters (line 22).
5. **Updated Runner Script**: Modified `runner.py` to use the learning rate from `cfg.model.learning_rate` (line 87).

Reviewed By: silverguo

Differential Revision: D75807517
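The config-to-runner wiring described in steps 4 and 5 can be sketched as follows. This is a minimal illustration, not the actual ExecuTorch code: the `SimpleNamespace` stand-in mimics the `cfg.model.learning_rate` access path, and the fallback default is an assumption for the sketch.

```python
from types import SimpleNamespace

# Stand-in for the parsed qwen_05b_config.yaml; in the real example the
# config is loaded by the training framework. The nesting mirrors the
# cfg.model.learning_rate access used in runner.py.
cfg = SimpleNamespace(model=SimpleNamespace(learning_rate=5e-3))

def get_learning_rate(cfg, default=2e-5):
    # Read the learning rate from the config rather than hard-coding it;
    # fall back to a default if the field is absent (the default value
    # here is an assumption, not taken from the PR).
    return getattr(cfg.model, "learning_rate", default)

print(get_learning_rate(cfg))  # prints 0.005
```

With this shape, changing line 22 of the YAML config changes the optimizer's learning rate without touching the runner script.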
Force-pushed from 719586f to 9989be2.
Force-pushed from e71295b to eb1a0d1.
Force-pushed from eb1a0d1 to a416537.
Force-pushed from a416537 to a5b9633.