
platform independent script to verify sha256 checksums #1203


Merged: 7 commits, May 3, 2023
README.md: 32 changes (20 additions, 12 deletions)

@@ -371,29 +371,37 @@ python3 convert.py models/gpt4all-7B/gpt4all-lora-quantized.bin

- The newer GPT4All-J model is not yet supported!

### Obtaining the Facebook LLaMA original model and Stanford Alpaca model data

- **Under no circumstances should IPFS, magnet links, or any other links to model downloads be shared anywhere in this repository, including in issues, discussions, or pull requests. They will be immediately deleted.**
- The LLaMA models are officially distributed by Facebook and will **never** be provided through this repository.
- Refer to [Facebook's LLaMA repository](https://github.com/facebookresearch/llama/pull/73/files) if you need to request access to the model data.

### Verifying the model files

Please verify the [sha256 checksums](SHA256SUMS) of all downloaded model files to confirm that you have the correct model data files before creating an issue relating to your model files.

- The following Python script will verify that you have all of the latest model files in your self-installed `./models` subdirectory:

```bash
# run the verification script
python3 scripts/verify-checksum-models.py
```

- On Linux or macOS it is also possible to run the following commands to verify the model files in your self-installed `./models` subdirectory (an example `SHA256SUMS` entry is shown below):
  - On Linux: `sha256sum --ignore-missing -c SHA256SUMS`
  - On macOS: `shasum -a 256 --ignore-missing -c SHA256SUMS`
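
For reference, each line of [SHA256SUMS](SHA256SUMS) pairs a SHA-256 digest (64 hex characters) with a path relative to the repository root, separated by two spaces, which is the format `sha256sum` emits and consumes. The entry below is illustrative only: the digest is a placeholder and the path is just an example of the kind of file listed.

```text
<64-character-hex-digest>  models/7B/consolidated.00.pth
```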

### Seminal papers and background on the models

If your issue is with model generation quality, then please at least scan the following links and papers to understand the limitations of LLaMA models. This is especially important when choosing an appropriate model size and appreciating both the significant and subtle differences between LLaMA models and ChatGPT:

- LLaMA:
  - [Introducing LLaMA: A foundational, 65-billion-parameter large language model](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/)
  - [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)
- GPT-3
  - [Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165)
- GPT-3.5 / InstructGPT / ChatGPT:
  - [Aligning language models to follow instructions](https://openai.com/research/instruction-following)
  - [Training language models to follow instructions with human feedback](https://arxiv.org/abs/2203.02155)

### Perplexity (measuring model quality)

scripts/verify-checksum-models.py: 78 changes (78 additions, 0 deletions)

@@ -0,0 +1,78 @@
import os
import hashlib


def sha256sum(file):
    block_size = 16 * 1024 * 1024  # 16 MB block size
    b = bytearray(block_size)
    file_hash = hashlib.sha256()
    mv = memoryview(b)
    with open(file, 'rb', buffering=0) as f:
        while True:
            n = f.readinto(mv)
            if not n:
                break
            file_hash.update(mv[:n])

    return file_hash.hexdigest()


# Define the path to the llama directory (parent folder of script directory)
llama_path = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir))

# Define the file with the list of hashes and filenames
hash_list_file = os.path.join(llama_path, "SHA256SUMS")

# Check if the hash list file exists
if not os.path.exists(hash_list_file):
    print(f"Hash list file not found: {hash_list_file}")
    exit(1)

# Read the hash file content and split it into an array of lines
with open(hash_list_file, "r") as f:
    hash_list = f.read().splitlines()

# Create an array to store the results
results = []

# Loop over each line in the hash list
for line in hash_list:
    # Split the line into hash and filename (separated by two spaces, as in sha256sum output)
    hash_value, filename = line.split("  ")

    # Get the full path of the file by joining the llama path and the filename
    file_path = os.path.join(llama_path, filename)

    # Inform the user of the progress of the integrity check
    print(f"Verifying the checksum of {file_path}")

    # Check if the file exists
    if os.path.exists(file_path):
        # Calculate the SHA256 checksum of the file using hashlib
        file_hash = sha256sum(file_path)

        # Compare the file hash with the expected hash
        if file_hash == hash_value:
            valid_checksum = "V"
            file_missing = ""
        else:
            valid_checksum = ""
            file_missing = ""
    else:
        valid_checksum = ""
        file_missing = "X"

    # Add the results to the array
    results.append({
        "filename": filename,
        "valid checksum": valid_checksum,
        "file missing": file_missing
    })


# Print column headers for the results table
print("\n" + "filename".ljust(40) + "valid checksum".center(20) + "file missing".center(20))
print("-" * 80)

# Output the results as a table
for r in results:
    print(f"{r['filename']:40} {r['valid checksum']:^20} {r['file missing']:^20}")