## How to use with llama.cpp
### Install with Homebrew

```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf onicai/llama_cpp_canister_models

# Run inference directly in the terminal:
llama-cli -hf onicai/llama_cpp_canister_models
```
### Install with WinGet (Windows)

```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf onicai/llama_cpp_canister_models

# Run inference directly in the terminal:
llama-cli -hf onicai/llama_cpp_canister_models
```
### Use a pre-built binary

```shell
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf onicai/llama_cpp_canister_models

# Run inference directly in the terminal:
./llama-cli -hf onicai/llama_cpp_canister_models
```
### Build from source

```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf onicai/llama_cpp_canister_models

# Run inference directly in the terminal:
./build/bin/llama-cli -hf onicai/llama_cpp_canister_models
```
### Use Docker

```shell
docker model run hf.co/onicai/llama_cpp_canister_models
```
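However the server is started, it exposes llama.cpp's OpenAI-compatible HTTP API. A minimal sketch of a chat request, assuming the default port 8080; the prompt and `max_tokens` value are illustrative, and the payload is written to a file first so it can be inspected before sending:

```shell
# Build an OpenAI-style chat request for a running llama-server.
# Assumptions: server is listening on the default http://localhost:8080.
cat > /tmp/chat_request.json <<'EOF'
{"messages": [{"role": "user", "content": "Tell me a short story."}], "max_tokens": 64}
EOF

# Inspect the payload:
cat /tmp/chat_request.json

# Send it (requires a running server):
# curl -s http://localhost:8080/v1/chat/completions \
#   -H "Content-Type: application/json" \
#   -d @/tmp/chat_request.json
```

The response follows the OpenAI chat-completions shape, so existing OpenAI client code can usually point at the local server unchanged.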
## Quick Links

On-chain llama.cpp - Internet Computer

You can run any `*.gguf` file in a llama_cpp_canister; the models in this repo are smaller ones you can use for testing onicai/llama_cpp_canister.

## Notes

### Set up local git with LFS

See: Getting Started: set-up

```shell
# Install git-lfs
# Ubuntu
sudo apt-get install git-lfs
# Mac
brew install git-lfs

# Enable LFS for your user account
git lfs install

# Install the Hugging Face CLI tools in a Python environment
pip install huggingface-hub

# Clone this repo
# https
git clone https://huggingface.co/onicai/llama_cpp_canister_models
# ssh
git clone git@hf.co:onicai/llama_cpp_canister_models

cd llama_cpp_canister_models

# Configure LFS for the local repo
huggingface-cli lfs-enable-largefiles .

# Tell LFS which files to track (writes .gitattributes)
git lfs track "*.gguf"

# Add, commit & push as usual with git
git add <file-name>
git commit -m "Adding <file-name>"
git push -u origin main
```
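Before pushing a multi-gigabyte `.gguf`, it is worth confirming the tracking rule actually matches the file. A small sketch using plain `git check-attr` (reads `.gitattributes` directly, no LFS hooks required) in a throwaway repo; the filter line written below is what `git lfs track "*.gguf"` produces:

```shell
# Sanity-check that *.gguf files are matched by the LFS rule before committing.
# Done in a throwaway repo so it cannot touch a real clone; the .gitattributes
# line is the one `git lfs track "*.gguf"` writes.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
echo '*.gguf filter=lfs diff=lfs merge=lfs -text' > .gitattributes
attr=$(git check-attr filter stories15Mtok4096.gguf)
echo "$attr"   # stories15Mtok4096.gguf: filter: lfs
```

If the output shows `filter: unspecified` instead, the pattern in `.gitattributes` does not cover the file and a push would upload it as a regular git blob.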

## Model creation

We used `convert-llama2c-to-ggml` to convert the llama2.c model + tokenizer to llama.cpp's GGUF format.

For example:

```shell
# From the llama.cpp root folder

# Build everything
make -j

# Convert a llama2c model + tokenizer to GGUF
./convert-llama2c-to-ggml --llama2c-model stories260Ktok512.bin --copy-vocab-from-model tok512.bin --llama2c-output-model stories260Ktok512.gguf
./convert-llama2c-to-ggml --llama2c-model stories15Mtok4096.bin --copy-vocab-from-model tok4096.bin --llama2c-output-model stories15Mtok4096.gguf
./convert-llama2c-to-ggml --llama2c-model stories42Mtok4096.bin --copy-vocab-from-model tok4096.bin --llama2c-output-model stories42Mtok4096.gguf
./convert-llama2c-to-ggml --llama2c-model stories110Mtok32000.bin --copy-vocab-from-model models/ggml-vocab-llama.gguf --llama2c-output-model stories110Mtok32000.gguf
./convert-llama2c-to-ggml --llama2c-model stories42Mtok32000.bin --copy-vocab-from-model models/ggml-vocab-llama.gguf --llama2c-output-model stories42Mtok32000.gguf

# Run the result locally, like this:
./llama-cli -m stories15Mtok4096.gguf -p "Joe loves writing stories" -n 600 -c 128
```
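The repeated conversion commands can also be generated with a loop. A sketch that only prints the commands (into `convert_all.sh` for review) rather than executing them, using the same model/vocab pairings listed above for the tok512/tok4096 models:

```shell
# Generate the per-model conversion commands from (model, vocab) pairs.
# Pairings are copied from the explicit commands above; nothing is executed,
# the commands are just written to convert_all.sh for review.
{
  for spec in "stories260Ktok512.bin tok512.bin" \
              "stories15Mtok4096.bin tok4096.bin" \
              "stories42Mtok4096.bin tok4096.bin"; do
    set -- $spec
    echo "./convert-llama2c-to-ggml --llama2c-model $1 --copy-vocab-from-model $2 --llama2c-output-model ${1%.bin}.gguf"
  done
} > convert_all.sh
cat convert_all.sh
```

After reviewing the generated file, `sh convert_all.sh` from the llama.cpp root would run the conversions.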
## Model info

- Format: GGUF
- Model size: 0.1B params
- Architecture: llama