---
license: mit
task_categories:
- text-generation
tags:
- llama-cpp
- llama-cpp-python
- wheels
- prebuilt
- cpu
- gpu
- manylinux
- gguf
- inference
pretty_name: "llama-cpp-python Prebuilt Wheels"
size_categories:
- 1K<n<10K
---
# llama-cpp-python Prebuilt Wheels
**The most complete collection of prebuilt `llama-cpp-python` wheels for manylinux x86_64.**
Stop compiling. Start inferencing.
```bash
pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+openblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl
```
## What's Inside
| | Count |
|---|---|
| **Total Wheels** | 3,794+ |
| **Versions** | 0.3.0 – 0.3.16 (17 versions) |
| **Python** | 3.8, 3.9, 3.10, 3.11, 3.12, 3.13, 3.14 |
| **Platform** | `manylinux_2_31_x86_64` |
| **Backends** | 8 |
| **CPU Profiles** | 13+ flag combinations |
## Backends
| Backend | Tag | Description |
|---------|-----|-------------|
| **OpenBLAS** | `openblas` | CPU BLAS acceleration; best general-purpose choice |
| **Intel MKL** | `mkl` | Intel Math Kernel Library; fastest on Intel CPUs |
| **Basic** | `basic` | No BLAS; maximum compatibility, no extra dependencies |
| **Vulkan** | `vulkan` | Universal GPU acceleration; works on NVIDIA, AMD, Intel |
| **CLBlast** | `clblast` | OpenCL GPU acceleration |
| **SYCL** | `sycl` | Intel GPU acceleration (Data Center, Arc, iGPU) |
| **OpenCL** | `opencl` | Generic OpenCL GPU backend |
| **RPC** | `rpc` | Distributed inference over network |
## CPU Optimization Profiles
Wheels are built with specific CPU instruction sets enabled. Pick the one that matches your hardware:
| CPU Tag | Instructions | Best For |
|---------|-------------|----------|
| `basic` | None | Any x86-64 CPU (maximum compatibility) |
| `avx` | AVX | Sandy Bridge+ (2011) |
| `avx_f16c` | AVX + F16C | Ivy Bridge+ (2012) |
| `avx2_fma_f16c` | AVX2 + FMA + F16C | **Haswell+ (2013), most common** |
| `avx2_fma_f16c_avxvnni` | AVX2 + FMA + F16C + AVX-VNNI | Alder Lake+ (2021) |
| `avx512_fma_f16c` | AVX-512 + FMA + F16C | Skylake-X+ (2017) |
| `avx512_fma_f16c_vnni` | + AVX512-VNNI | Cascade Lake+ (2019) |
| `avx512_fma_f16c_vnni_vbmi` | + AVX512-VBMI | Ice Lake+ (2019) |
| `avx512_fma_f16c_vnni_vbmi_bf16_amx` | + BF16 + AMX | Sapphire Rapids+ (2023) |
### How to Pick the Right Wheel
**Don't know your CPU?** Start with `avx2_fma_f16c`; it works on any CPU from 2013 onwards (Intel Haswell, AMD Ryzen, and newer).
**Want maximum compatibility?** Use `basic`; it works on any x86-64 CPU.
**Have a server CPU?** Check if it supports AVX-512:
```bash
grep -o 'avx[^ ]*\|fma\|f16c\|bmi2\|sse4_2' /proc/cpuinfo | sort -u
```
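As a rough sketch, the profile table above can be turned into a small detector. The flag names and the mapping below are assumptions based on common `/proc/cpuinfo` output, and the `avxvnni`/`bf16`/`amx` tiers are omitted for brevity:

```python
import pathlib

# Hypothetical helper: maps /proc/cpuinfo flags to the closest CPU tag above.
def pick_cpu_tag(flags: set) -> str:
    """Return the most optimized CPU tag from the table that `flags` supports."""
    if {"avx512f", "fma", "f16c"} <= flags:
        tag = "avx512_fma_f16c"
        if "avx512_vnni" in flags:
            tag += "_vnni"
            if "avx512_vbmi" in flags:
                tag += "_vbmi"
        return tag
    if {"avx2", "fma", "f16c"} <= flags:
        return "avx2_fma_f16c"
    if {"avx", "f16c"} <= flags:
        return "avx_f16c"
    if "avx" in flags:
        return "avx"
    return "basic"

cpuinfo = pathlib.Path("/proc/cpuinfo")
if cpuinfo.exists():  # Linux only
    line = next(l for l in cpuinfo.read_text().splitlines() if l.startswith("flags"))
    print(pick_cpu_tag(set(line.split())))
```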
## Filename Format
All wheels follow the [PEP 440](https://peps.python.org/pep-0440/) local version identifier standard:
```
llama_cpp_python-{version}+{backend}_{cpu_flags}-{python}-{python}-{platform}.whl
```
Examples:
```
llama_cpp_python-0.3.16+openblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl
llama_cpp_python-0.3.16+vulkan-cp312-cp312-manylinux_2_31_x86_64.whl
llama_cpp_python-0.3.16+basic-cp310-cp310-manylinux_2_31_x86_64.whl
```
The local version label (`+openblas_avx2_fma_f16c`) encodes:
- **Backend**: `openblas`, `mkl`, `basic`, `vulkan`, `clblast`, `sycl`, `opencl`, `rpc`
- **CPU flags** (in order): `avx`, `avx2`, `avx512`, `fma`, `f16c`, `vnni`, `vbmi`, `bf16`, `avxvnni`, `amx`
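For scripting against this repo, the filename convention above can be parsed with a small regex. This is an illustrative helper under the stated convention, not an official API:

```python
import re

# Matches: llama_cpp_python-{version}+{backend}[_{cpu_flags}]-{python}-{python}-{platform}.whl
WHEEL_RE = re.compile(
    r"llama_cpp_python-(?P<version>[\d.]+)\+(?P<backend>[a-z]+)"
    r"(?:_(?P<cpu>[a-z0-9_]+))?-(?P<py>cp\d+)-cp\d+-(?P<platform>.+)\.whl"
)

name = "llama_cpp_python-0.3.16+openblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl"
m = WHEEL_RE.fullmatch(name)
print(m.group("version"), m.group("backend"), m.group("cpu"), m.group("py"))
```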
## Quick Start
### CPU (OpenBLAS + AVX2, recommended for most users)
```bash
sudo apt-get install libopenblas-dev
pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+openblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl
```
### GPU (Vulkan, works on any GPU vendor)
```bash
sudo apt-get install libvulkan1
pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+vulkan-cp311-cp311-manylinux_2_31_x86_64.whl
```
### Basic (zero dependencies)
```bash
pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+basic-cp311-cp311-manylinux_2_31_x86_64.whl
```
### Example Usage
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Qwen/Qwen2.5-Coder-7B-Instruct-GGUF",
    filename="*q4_k_m.gguf",
    n_ctx=4096,
)
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python hello world"}],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])
```
## Runtime Dependencies
| Backend | Required Packages |
|---------|------------------|
| OpenBLAS | `libopenblas0` (runtime) or `libopenblas-dev` (build) |
| MKL | Intel oneAPI MKL |
| Vulkan | `libvulkan1` |
| CLBlast | `libclblast1` |
| OpenCL | `ocl-icd-libopencl1` |
| Basic | **None** |
| SYCL | Intel oneAPI DPC++ runtime |
| RPC | Network access to RPC server |
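Before installing a wheel, you can check whether a backend's shared library is already resolvable with the standard library. The backend-to-library mapping below is an assumption based on the Debian/Ubuntu package names in the table:

```python
import ctypes.util

# Assumed library base names for the packages above
# (libopenblas0, libvulkan1, libclblast1, ocl-icd-libopencl1).
BACKEND_LIBS = {
    "openblas": "openblas",
    "vulkan": "vulkan",
    "clblast": "clblast",
    "opencl": "OpenCL",
}

for backend, lib in BACKEND_LIBS.items():
    path = ctypes.util.find_library(lib)  # None if the library is not installed
    print(f"{backend}: {path or 'not found'}")
```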
## How These Wheels Are Built
These wheels are built by the **Ultimate Llama Wheel Factory**, a distributed build system running entirely on free HuggingFace Spaces:
| Component | Link |
|-----------|------|
| Dispatcher | [wheel-factory-dispatcher](https://huggingface.co/spaces/AIencoder/wheel-factory-dispatcher) |
| Workers 1-4 | [wheel-factory-worker-1](https://huggingface.co/spaces/AIencoder/wheel-factory-worker-1) ... 4 |
| Auditor | [wheel-factory-auditor](https://huggingface.co/spaces/AIencoder/wheel-factory-auditor) |
The factory uses explicit CMake flags matching llama.cpp's official CPU variant builds:
```bash
CMAKE_ARGS="-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS -DGGML_AVX2=ON -DGGML_FMA=ON -DGGML_F16C=ON -DGGML_AVX=OFF -DGGML_AVX512=OFF -DGGML_NATIVE=OFF"
```
Every flag is set explicitly (no CMake defaults) to ensure reproducible, deterministic builds.
## FAQ
**Q: Which wheel should I use?**
For most people: `openblas_avx2_fma_f16c` with your Python version. It's fast, works on 90%+ of modern CPUs, and only needs `libopenblas`.
**Q: Can I use these on Ubuntu / Debian / Fedora / Arch?**
Yes, `manylinux_2_31` wheels work on any Linux distro with glibc 2.31 or newer (Ubuntu 20.04+, Debian 11+, Fedora 34+, Arch).
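If in doubt, the glibc floor can be checked from Python with the standard library; `glibc_ok` below is a hypothetical helper, not part of any package:

```python
import platform

# Check the interpreter's C library against the manylinux_2_31 floor (glibc >= 2.31).
def glibc_ok(libc: str, version: str, floor=(2, 31)) -> bool:
    if libc != "glibc":  # e.g. musl on Alpine
        return False
    return tuple(int(p) for p in version.split(".")[:2]) >= floor

libc, version = platform.libc_ver()
print(glibc_ok(libc, version))
```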
**Q: What about Windows / macOS / CUDA wheels?**
This repo focuses on manylinux x86_64. For other platforms, see:
- [abetlen's official wheel index](https://abetlen.github.io/llama-cpp-python/whl/) (CPU, CUDA 12.1-12.5, Metal)
- [jllllll's CUDA wheels](https://github.com/jllllll/llama-cpp-python-cuBLAS-wheels) (cuBLAS + AVX combos)
**Q: These wheels don't work on Alpine Linux.**
Alpine uses musl, not glibc. These are `manylinux` (glibc) wheels. Build from source or use `musllinux` wheels.
**Q: I get "illegal instruction" errors.**
You're using a wheel with CPU flags your processor doesn't support. Try `basic` (no SIMD) or check your CPU flags with:
```bash
grep -o 'avx[^ ]*\|fma\|f16c' /proc/cpuinfo | sort -u
```
**Q: Can I contribute more wheels?**
Yes! The factory source code is open. See the Dispatcher and Worker Spaces linked above.
## License
MIT, the same license as [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) and [llama.cpp](https://github.com/ggml-org/llama.cpp).
## Credits
- [llama.cpp](https://github.com/ggml-org/llama.cpp) by Georgi Gerganov and the ggml community
- [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) by Andrei Betlen
- Built by [AIencoder](https://huggingface.co/AIencoder)