
Commit f19dca0

devops : RPM Specs (#2723)
* Create llama-cpp.srpm
* Rename llama-cpp.srpm to llama-cpp.srpm.spec. Correcting extension.
* Tested spec success.
* Update llama-cpp.srpm.spec
* Create lamma-cpp-cublas.srpm.spec
* Create lamma-cpp-clblast.srpm.spec
* Update lamma-cpp-cublas.srpm.spec. Added BuildRequires.
* Moved to devops dir
1 parent 8207214 commit f19dca0

File tree

3 files changed: +175 −0 lines changed

.devops/lamma-cpp-clblast.srpm.spec

Lines changed: 58 additions & 0 deletions
@@ -0,0 +1,58 @@
# SRPM for building from source and packaging an RPM for RPM-based distros.
# https://fedoraproject.org/wiki/How_to_create_an_RPM_package
# Built and maintained by John Boero - [email protected]
# In honor of Seth Vidal https://www.redhat.com/it/blog/thank-you-seth-vidal

# Notes for llama.cpp:
# 1. Tags are currently based on hash - which will not sort asciibetically.
#    We need to declare standard versioning if people want to sort latest releases.
# 2. Builds for CUDA/OpenCL support are separate, with different dependencies.
# 3. NVidia's developer repo must be enabled with nvcc, cublas, clblas, etc. installed.
#    Example: https://developer.download.nvidia.com/compute/cuda/repos/fedora37/x86_64/cuda-fedora37.repo
# 4. OpenCL/CLBLAST support simply requires the ICD loader and basic OpenCL libraries.
#    It is up to the user to install the correct vendor-specific support.

Name:           llama.cpp-clblast
Version:        master
Release:        1%{?dist}
Summary:        OpenCL Inference of LLaMA model in pure C/C++
License:        MIT
Source0:        https://github.com/ggerganov/llama.cpp/archive/refs/heads/master.tar.gz
BuildRequires:  coreutils make gcc-c++ git mesa-libOpenCL-devel
URL:            https://github.com/ggerganov/llama.cpp

%define debug_package %{nil}
%define source_date_epoch_from_changelog 0

%description
OpenCL inference for Meta's LLaMA 2 models using default options.

%prep
%setup -n llama.cpp-master

%build
make -j LLAMA_CLBLAST=1

%install
mkdir -p %{buildroot}%{_bindir}/
cp -p main %{buildroot}%{_bindir}/llamacppclblast
cp -p server %{buildroot}%{_bindir}/llamacppclblastserver
cp -p simple %{buildroot}%{_bindir}/llamacppclblastsimple

%clean
rm -rf %{buildroot}
rm -rf %{_builddir}/*

%files
%{_bindir}/llamacppclblast
%{_bindir}/llamacppclblastserver
%{_bindir}/llamacppclblastsimple

%pre

%post

%preun
%postun

%changelog
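The spec uses `%setup -n llama.cpp-master` rather than the default `%setup`, because GitHub's `refs/heads/master.tar.gz` archive unpacks into a `<repo>-master` directory instead of the `%{name}-%{version}` directory `%setup` assumes. A minimal local reproduction of that tarball layout (temp paths and file contents here are illustrative only):

```shell
set -e
WORK="$(mktemp -d)"

# Mimic the layout of GitHub's master.tar.gz: one top-level dir named
# <repo>-<branch>, i.e. llama.cpp-master, containing the sources.
mkdir -p "$WORK/llama.cpp-master"
printf 'int main(){return 0;}\n' > "$WORK/llama.cpp-master/main.c"
tar -C "$WORK" -czf "$WORK/master.tar.gz" llama.cpp-master

# The first entry in the archive listing is the top-level directory that
# %setup must be pointed at with -n:
TOPDIR_IN_TAR="$(tar -tzf "$WORK/master.tar.gz" | head -n1)"
echo "$TOPDIR_IN_TAR"
```

Since `llama.cpp-master` matches neither `Name:` (`llama.cpp-clblast`) nor `Name-Version`, a plain `%setup` would fail to find the extracted tree.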

.devops/lamma-cpp-cublas.srpm.spec

Lines changed: 59 additions & 0 deletions
@@ -0,0 +1,59 @@
# SRPM for building from source and packaging an RPM for RPM-based distros.
# https://fedoraproject.org/wiki/How_to_create_an_RPM_package
# Built and maintained by John Boero - [email protected]
# In honor of Seth Vidal https://www.redhat.com/it/blog/thank-you-seth-vidal

# Notes for llama.cpp:
# 1. Tags are currently based on hash - which will not sort asciibetically.
#    We need to declare standard versioning if people want to sort latest releases.
# 2. Builds for CUDA/OpenCL support are separate, with different dependencies.
# 3. NVidia's developer repo must be enabled with nvcc, cublas, clblas, etc. installed.
#    Example: https://developer.download.nvidia.com/compute/cuda/repos/fedora37/x86_64/cuda-fedora37.repo
# 4. OpenCL/CLBLAST support simply requires the ICD loader and basic OpenCL libraries.
#    It is up to the user to install the correct vendor-specific support.

Name:           llama.cpp-cublas
Version:        master
Release:        1%{?dist}
Summary:        CUDA Inference of LLaMA model in pure C/C++
License:        MIT
Source0:        https://github.com/ggerganov/llama.cpp/archive/refs/heads/master.tar.gz
BuildRequires:  coreutils make gcc-c++ git cuda-toolkit
Requires:       cuda-toolkit
URL:            https://github.com/ggerganov/llama.cpp

%define debug_package %{nil}
%define source_date_epoch_from_changelog 0

%description
CUDA inference for Meta's LLaMA 2 models using default options.

%prep
%setup -n llama.cpp-master

%build
make -j LLAMA_CUBLAS=1

%install
mkdir -p %{buildroot}%{_bindir}/
cp -p main %{buildroot}%{_bindir}/llamacppcublas
cp -p server %{buildroot}%{_bindir}/llamacppcublasserver
cp -p simple %{buildroot}%{_bindir}/llamacppcublassimple

%clean
rm -rf %{buildroot}
rm -rf %{_builddir}/*

%files
%{_bindir}/llamacppcublas
%{_bindir}/llamacppcublasserver
%{_bindir}/llamacppcublassimple

%pre

%post

%preun
%postun

%changelog
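Per note 3 in the spec header, `cuda-toolkit` comes from NVidia's developer repo, which is not enabled by default on Fedora. One way to enable it, assuming `dnf-plugins-core` is installed and using the Fedora 37 repo URL quoted in the spec (adjust for your release):

```shell
# Enable NVidia's CUDA repo, then install the toolkit the spec's
# BuildRequires/Requires refer to. Requires root and network access.
sudo dnf config-manager --add-repo \
    https://developer.download.nvidia.com/compute/cuda/repos/fedora37/x86_64/cuda-fedora37.repo
sudo dnf install -y cuda-toolkit
```

This is a configuration sketch, not part of the committed specs; the clblast variant instead only needs `mesa-libOpenCL-devel` plus a vendor ICD at run time.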

.devops/llama-cpp.srpm.spec

Lines changed: 58 additions & 0 deletions
@@ -0,0 +1,58 @@
# SRPM for building from source and packaging an RPM for RPM-based distros.
# https://fedoraproject.org/wiki/How_to_create_an_RPM_package
# Built and maintained by John Boero - [email protected]
# In honor of Seth Vidal https://www.redhat.com/it/blog/thank-you-seth-vidal

# Notes for llama.cpp:
# 1. Tags are currently based on hash - which will not sort asciibetically.
#    We need to declare standard versioning if people want to sort latest releases.
# 2. Builds for CUDA/OpenCL support are separate, with different dependencies.
# 3. NVidia's developer repo must be enabled with nvcc, cublas, clblas, etc. installed.
#    Example: https://developer.download.nvidia.com/compute/cuda/repos/fedora37/x86_64/cuda-fedora37.repo
# 4. OpenCL/CLBLAST support simply requires the ICD loader and basic OpenCL libraries.
#    It is up to the user to install the correct vendor-specific support.

Name:           llama.cpp
Version:        master
Release:        1%{?dist}
Summary:        CPU Inference of LLaMA model in pure C/C++ (no CUDA/OpenCL)
License:        MIT
Source0:        https://github.com/ggerganov/llama.cpp/archive/refs/heads/master.tar.gz
BuildRequires:  coreutils make gcc-c++ git
URL:            https://github.com/ggerganov/llama.cpp

%define debug_package %{nil}
%define source_date_epoch_from_changelog 0

%description
CPU inference for Meta's LLaMA 2 models using default options.

%prep
%autosetup

%build
make -j

%install
mkdir -p %{buildroot}%{_bindir}/
cp -p main %{buildroot}%{_bindir}/llamacpp
cp -p server %{buildroot}%{_bindir}/llamacppserver
cp -p simple %{buildroot}%{_bindir}/llamacppsimple

%clean
rm -rf %{buildroot}
rm -rf %{_builddir}/*

%files
%{_bindir}/llamacpp
%{_bindir}/llamacppserver
%{_bindir}/llamacppsimple

%pre

%post

%preun
%postun

%changelog
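To exercise any of these specs, `rpmbuild` (from the `rpm-build` package) expects a `_topdir` tree with `SPECS/` and `SOURCES/` subdirectories. A sketch of that workflow, with the network download and the actual build left commented out since they need rpm tooling and the staged tarball (the temp `_topdir` path is illustrative):

```shell
set -e
# Create a throwaway rpmbuild tree instead of touching ~/rpmbuild.
TOPDIR="$(mktemp -d)/rpmbuild"
mkdir -p "$TOPDIR"/SPECS "$TOPDIR"/SOURCES "$TOPDIR"/BUILD "$TOPDIR"/RPMS "$TOPDIR"/SRPMS

# Stage the spec and the tarball named in Source0:
# cp .devops/llama-cpp.srpm.spec "$TOPDIR/SPECS/"
# curl -L -o "$TOPDIR/SOURCES/master.tar.gz" \
#     https://github.com/ggerganov/llama.cpp/archive/refs/heads/master.tar.gz

# Then build a binary RPM against that tree:
# rpmbuild --define "_topdir $TOPDIR" -bb "$TOPDIR/SPECS/llama-cpp.srpm.spec"
echo "$TOPDIR"
```

The same invocation works for the clblast and cublas specs; only the spec filename and the extra build dependencies differ.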
