* Removing GPTQ from all of torchchat
* Updating lm_eval version (#865)
Fixing CI related to EleutherAI/wikitext_document_level changing its requirements for using HF Datasets
* Pinning numpy to under 2.0 (#867)
* Rebase + Add back accidental deletion
* Update Quant call using llama.cpp (#868)
llama.cpp made a BC-breaking refactor (ggml-org/llama.cpp@1c641e6) that broke some of our CI.
This updates our CI to match llama.cpp's new schema.
* Updating torch nightly to pick up AOTI improvements in 128339 (#862)
* Update the torch version to 2.5 (a hedged version-check sketch follows after this list)
* Creating an initial Quantization Directory (#863)
* Moving qops
* Updating imports (a hypothetical import sketch follows after this list)
* Removing all references to HQQ (#869)
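
A minimal sketch of a runtime check mirroring the dependency moves listed above (numpy pinned under 2.0 per #867, torch moved to the 2.5 nightly stream per #862). The exact pins live in torchchat's requirements files and are not reproduced here, so the bounds below are assumptions.

```python
# Illustrative only: verify installed versions against the assumed pins.
from importlib.metadata import version

from packaging.version import Version


def check_assumed_pins() -> None:
    numpy_ver = Version(version("numpy"))
    torch_ver = Version(version("torch"))
    # numpy pinned below 2.0 (#867) -- assumed upper bound.
    assert numpy_ver < Version("2.0"), f"numpy {numpy_ver} violates the <2.0 pin"
    # torch bumped to the 2.5 nightlies (#862) -- assumed lower bound.
    assert torch_ver >= Version("2.5.0.dev0"), f"torch {torch_ver} predates the 2.5 nightlies"


if __name__ == "__main__":
    check_assumed_pins()
    print("assumed dependency pins satisfied")
```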
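A hypothetical sketch of the import update implied by #863, where qops moves from the repo root into the new quantization directory. The module path and the LinearInt8 symbol below are placeholders for illustration, not names taken from the torchchat source.

```python
# Assumed layout: qops.py relocated to quantization/qops.py (#863).
try:
    from quantization.qops import LinearInt8  # assumed new location after the move
except ImportError:
    from qops import LinearInt8  # assumed old top-level location, pre-#863
```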