Commit 7c2256d

Supporting blog content/using openelm models (#338)
* added notebook
* removed unused file
* black formatter
* second attempt
* 3rd attempt
* ran make pre-commit for formatting fixes
1 parent 6225919 commit 7c2256d

4 files changed: +1450 -0 lines changed
Lines changed: 47 additions & 0 deletions
@@ -0,0 +1,47 @@
Copyright (C) 2024 Apple Inc. All Rights Reserved.

Disclaimer: IMPORTANT: This Apple software is supplied to you by Apple
Inc. ("Apple") in consideration of your agreement to the following
terms, and your use, installation, modification or redistribution of
this Apple software constitutes acceptance of these terms. If you do
not agree with these terms, please do not use, install, modify or
redistribute this Apple software.

In consideration of your agreement to abide by the following terms, and
subject to these terms, Apple grants you a personal, non-exclusive
license, under Apple's copyrights in this original Apple software (the
"Apple Software"), to use, reproduce, modify and redistribute the Apple
Software, with or without modifications, in source and/or binary forms;
provided that if you redistribute the Apple Software in its entirety and
without modifications, you must retain this notice and the following
text and disclaimers in all such redistributions of the Apple Software.
Neither the name, trademarks, service marks or logos of Apple Inc. may
be used to endorse or promote products derived from the Apple Software
without specific prior written permission from Apple. Except as
expressly stated in this notice, no other rights or licenses, express or
implied, are granted by Apple herein, including but not limited to any
patent rights that may be infringed by your derivative works or by other
works in which the Apple Software may be incorporated.

The Apple Software is provided by Apple on an "AS IS" basis. APPLE
MAKES NO WARRANTIES, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION
THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE, REGARDING THE APPLE SOFTWARE OR ITS USE AND
OPERATION ALONE OR IN COMBINATION WITH YOUR PRODUCTS.

IN NO EVENT SHALL APPLE BE LIABLE FOR ANY SPECIAL, INDIRECT, INCIDENTAL
OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) ARISING IN ANY WAY OUT OF THE USE, REPRODUCTION,
MODIFICATION AND/OR DISTRIBUTION OF THE APPLE SOFTWARE, HOWEVER CAUSED
AND WHETHER UNDER THEORY OF CONTRACT, TORT (INCLUDING NEGLIGENCE),
STRICT LIABILITY OR OTHERWISE, EVEN IF APPLE HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.


-------------------------------------------------------------------------------
SOFTWARE DISTRIBUTED IN THIS REPOSITORY:

This software includes a number of subcomponents with separate
copyright notices and license terms - please see the file ACKNOWLEDGEMENTS.
-------------------------------------------------------------------------------
Lines changed: 216 additions & 0 deletions
@@ -0,0 +1,216 @@
---
license: other
license_name: apple-sample-code-license
license_link: LICENSE
---

# OpenELM: An Efficient Language Model Family with Open Training and Inference Framework

*Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari*

We introduce **OpenELM**, a family of **Open** **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction-tuned models with 270M, 450M, 1.1B and 3B parameters.

Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check the license agreements and terms of these datasets before using them.

See the list below for the details of each model:

- [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M)
- [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M)
- [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B)
- [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B)
- [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct)
- [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct)
- [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct)
- [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct)

```python
from transformers import AutoModelForCausalLM

openelm_270m = AutoModelForCausalLM.from_pretrained("apple/OpenELM-270M", trust_remote_code=True)
openelm_450m = AutoModelForCausalLM.from_pretrained("apple/OpenELM-450M", trust_remote_code=True)
openelm_1b = AutoModelForCausalLM.from_pretrained("apple/OpenELM-1_1B", trust_remote_code=True)
openelm_3b = AutoModelForCausalLM.from_pretrained("apple/OpenELM-3B", trust_remote_code=True)

openelm_270m_instruct = AutoModelForCausalLM.from_pretrained("apple/OpenELM-270M-Instruct", trust_remote_code=True)
openelm_450m_instruct = AutoModelForCausalLM.from_pretrained("apple/OpenELM-450M-Instruct", trust_remote_code=True)
openelm_1b_instruct = AutoModelForCausalLM.from_pretrained("apple/OpenELM-1_1B-Instruct", trust_remote_code=True)
openelm_3b_instruct = AutoModelForCausalLM.from_pretrained("apple/OpenELM-3B-Instruct", trust_remote_code=True)
```
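
For a quick end-to-end check, the snippet below is a minimal generation sketch. OpenELM does not ship its own tokenizer; following the Evaluation section below, it uses the LLaMA tokenizer (`meta-llama/Llama-2-7b-hf`, a gated checkpoint that requires an approved Hugging Face access token). The prompt and generation parameters are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# OpenELM reuses the LLaMA tokenizer (see the Evaluation section);
# meta-llama/Llama-2-7b-hf is gated, so an approved HF token is required.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained("apple/OpenELM-270M", trust_remote_code=True)

inputs = tokenizer("Once upon a time there was", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
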
## Usage

We have provided an example function to generate output from OpenELM models loaded via the [Hugging Face Hub](https://huggingface.co/docs/hub/) in `generate_openelm.py`.

You can try the model by running the following command:

```bash
python generate_openelm.py --model [MODEL_NAME] --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2
```

Please refer to [this link](https://huggingface.co/docs/hub/security-tokens) to obtain your Hugging Face access token.

Additional arguments to the Hugging Face `generate` function can be passed via `generate_kwargs`. As an example, to speed up inference, you can try [lookup-token speculative generation](https://huggingface.co/docs/transformers/generation_strategies) by passing the `prompt_lookup_num_tokens` argument as follows:

```bash
python generate_openelm.py --model [MODEL_NAME] --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 prompt_lookup_num_tokens=10
```
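
If you call `generate` directly from Python instead of going through `generate_openelm.py`, the same option is just a keyword argument. A minimal sketch, reusing the `model`, `tokenizer`, and `inputs` from the loading example above:

```python
# Prompt-lookup decoding drafts candidate tokens from n-gram matches in the
# prompt itself, so it helps most when the output reuses spans of the input.
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    repetition_penalty=1.2,
    prompt_lookup_num_tokens=10,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
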
Alternatively, you can try model-wise speculative generation with an [assistant model](https://huggingface.co/blog/assisted-generation) by passing a smaller model via the `assistant_model` argument, for example:

```bash
python generate_openelm.py --model [MODEL_NAME] --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 --assistant_model [SMALLER_MODEL_NAME]
```
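
The direct-Python equivalent passes the draft model via the `assistant_model` argument of `generate`. The sketch below pairs OpenELM-270M as a draft model for OpenELM-3B; the pairing is illustrative rather than prescribed by this card, and it works because all OpenELM sizes share the LLaMA tokenizer:

```python
# Assisted (speculative) generation: the small model proposes tokens that the
# large model then verifies, trading extra memory for lower latency.
target = AutoModelForCausalLM.from_pretrained("apple/OpenELM-3B", trust_remote_code=True)
assistant = AutoModelForCausalLM.from_pretrained("apple/OpenELM-270M", trust_remote_code=True)

outputs = target.generate(**inputs, max_new_tokens=64, assistant_model=assistant)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
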
## Main Results

### Zero-Shot

| **Model** | **ARC-c** | **ARC-e** | **BoolQ** | **HellaSwag** | **PIQA** | **SciQ** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------|-----------|---------------|-----------|-----------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 26.45 | 45.08 | **53.98** | 46.71 | 69.75 | **84.70** | **53.91** | 54.37 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **30.55** | **46.68** | 48.56 | **52.07** | **70.78** | 84.40 | 52.72 | **55.11** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **30.38** | **50.00** | **60.37** | **59.34** | **72.63** | **88.00** | **58.96** | **59.95** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 32.34 | **55.43** | 63.58 | 64.81 | **75.57** | **90.60** | 61.72 | 63.44 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **37.97** | 52.23 | **70.00** | **71.20** | 75.03 | 89.30 | **62.75** | **65.50** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | **92.70** | 65.51 | 67.39 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **39.42** | **61.74** | **68.17** | **76.36** | **79.00** | 92.50 | **66.85** | **69.15** |

### LLM360

| **Model** | **ARC-c** | **HellaSwag** | **MMLU** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|---------------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | 47.15 | 25.72 | **39.24** | **53.83** | 38.72 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | **51.58** | **26.70** | 38.72 | 53.20 | **40.54** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | 53.86 | **26.01** | 40.18 | 57.22 | 41.50 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | **59.31** | 25.41 | **40.48** | **58.33** | **43.41** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | 65.71 | **27.05** | 36.98 | 63.22 | 45.93 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | **71.83** | 25.65 | **45.95** | **64.72** | **49.94** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | 73.28 | **26.76** | 34.98 | 67.25 | 48.90 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | **76.87** | 24.80 | **38.76** | **67.96** | **51.22** |

### OpenLLM Leaderboard

| **Model** | **ARC-c** | **CrowS-Pairs** | **HellaSwag** | **MMLU** | **PIQA** | **RACE** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------------|---------------|-----------|-----------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | **66.79** | 47.15 | 25.72 | 69.75 | 30.91 | **39.24** | **53.83** | 45.13 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | 66.01 | **51.58** | **26.70** | **70.78** | 33.78 | 38.72 | 53.20 | **46.66** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | **68.63** | 53.86 | **26.01** | 72.31 | 33.11 | 40.18 | 57.22 | 47.69 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | 67.44 | **59.31** | 25.41 | **72.63** | **36.84** | **40.48** | **58.33** | **49.25** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | **71.74** | 65.71 | **27.05** | **75.57** | 36.46 | 36.98 | 63.22 | 51.68 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | 71.02 | **71.83** | 25.65 | 75.03 | **39.43** | **45.95** | **64.72** | **54.40** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | **73.29** | 73.28 | **26.76** | 78.24 | **38.76** | 34.98 | 67.25 | 54.35 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | 72.33 | **76.87** | 24.80 | **79.00** | 38.47 | **38.76** | **67.96** | **55.73** |

See the [technical report](https://arxiv.org/abs/2404.14619) for more results and comparisons.
## Evaluation

### Setup

Install the following dependencies:

```bash
# install the public lm-eval-harness
harness_repo="public-lm-eval-harness"
git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo}
cd ${harness_repo}
# use the main branch as of 2024-03-15; the SHA is dc90fec
git checkout dc90fec
pip install -e .
cd ..

# 66d6242 is the main branch as of 2024-04-01
pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242
# quote the version specifiers so the shell does not treat ">" as redirection
pip install "tokenizers>=0.15.2" "transformers>=4.38.2" "sentencepiece>=0.2.0"
```
### Evaluate OpenELM

```bash
# OpenELM-270M
hf_model=apple/OpenELM-270M

# this flag is needed because lm-eval-harness sets add_bos_token to False by default,
# but OpenELM uses the LLaMA tokenizer, which requires add_bos_token to be True
tokenizer=meta-llama/Llama-2-7b-hf
add_bos_token=True
batch_size=1

mkdir lm_eval_output

shot=0
task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2
lm_eval --model hf \
    --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
    --tasks ${task} \
    --device cuda:0 \
    --num_fewshot ${shot} \
    --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
    --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log

shot=5
task=mmlu,winogrande
lm_eval --model hf \
    --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
    --tasks ${task} \
    --device cuda:0 \
    --num_fewshot ${shot} \
    --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
    --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log

shot=25
task=arc_challenge,crows_pairs_english
lm_eval --model hf \
    --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
    --tasks ${task} \
    --device cuda:0 \
    --num_fewshot ${shot} \
    --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
    --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log

shot=10
task=hellaswag
lm_eval --model hf \
    --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
    --tasks ${task} \
    --device cuda:0 \
    --num_fewshot ${shot} \
    --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
    --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
```
## Bias, Risks, and Limitations

The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. Thus, it is imperative for users and developers to undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements.

## Citation

If you find our work useful, please cite:

```BibTex
@article{mehtaOpenELMEfficientLanguage2024,
    title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open} {Training} and {Inference} {Framework}},
    shorttitle = {{OpenELM}},
    url = {https://arxiv.org/abs/2404.14619v1},
    language = {en},
    urldate = {2024-04-24},
    journal = {arXiv.org},
    author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad},
    month = apr,
    year = {2024},
}

@inproceedings{mehta2022cvnets,
    author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad},
    title = {CVNets: High Performance Library for Computer Vision},
    year = {2022},
    booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
    series = {MM '22}
}
```
