(both ideas and code) from his repo. You can never go wrong by
following Andrej's work.
* Michael Gschwind, Bert Maher, Scott Wolchok, Bin Bao, Chen Yang,
  Huamin Li and Mu-Chu Li, who built the first version of
  [nanoGPT](https://github.com/karpathy/nanoGPT) with AOTInductor
  (`DSOGPT`), proving that AOTI can be used to build efficient LLMs and
  that DSOs are a viable distribution format for models.

* Bert Maher and his
  [llama2.so](https://github.com/bertmaher/llama2.so), which built on
  Andrej's llama2.c and on DSOGPT to close the loop on Llama models
  with AOTInductor.

* Christian Puhrsch, Horace He, Joe Isaacson and many more for their
  many contributions in Accelerating GenAI models in the *"Anything,
  Fast!"* pytorch.org blogs, and, in particular, Horace He for [GPT,
  Fast!](https://github.com/pytorch-labs/gpt-fast), which we have
  directly adopted (both ideas and code) from his repo.

* Mobius Labs as the authors of the HQQ quantization algorithms
  included in this distribution.