Instructions for using DuoNeural/Ministral-8B-Instruct-GGUF with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- llama-cpp-python
How to use DuoNeural/Ministral-8B-Instruct-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="DuoNeural/Ministral-8B-Instruct-GGUF",
    filename="Ministral-8B-Instruct-IQ1_S.gguf",
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```
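The call returns a dict in the OpenAI chat-completions format, so the reply text lives under the first choice; a minimal sketch of pulling it out (the key layout follows the OpenAI schema that llama-cpp-python mirrors):

```python
# Extract the assistant's reply from the OpenAI-style response dict
print(response["choices"][0]["message"]["content"])
```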
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use DuoNeural/Ministral-8B-Instruct-GGUF with llama.cpp:
Install from Homebrew (macOS/Linux)
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf DuoNeural/Ministral-8B-Instruct-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf DuoNeural/Ministral-8B-Instruct-GGUF:Q4_K_M
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf DuoNeural/Ministral-8B-Instruct-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf DuoNeural/Ministral-8B-Instruct-GGUF:Q4_K_M
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf DuoNeural/Ministral-8B-Instruct-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf DuoNeural/Ministral-8B-Instruct-GGUF:Q4_K_M
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf DuoNeural/Ministral-8B-Instruct-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf DuoNeural/Ministral-8B-Instruct-GGUF:Q4_K_M
```
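Whichever install route you take, llama-server speaks the OpenAI chat-completions protocol; a minimal sketch of querying it with curl, assuming the server's default port 8080:

```sh
# Query the local llama-server via its OpenAI-compatible API (default port 8080)
curl -X POST "http://localhost:8080/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```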
Use Docker
```sh
docker model run hf.co/DuoNeural/Ministral-8B-Instruct-GGUF:Q4_K_M
```
- LM Studio
- Jan
- vLLM
How to use DuoNeural/Ministral-8B-Instruct-GGUF with vLLM:
Install from pip and serve the model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "DuoNeural/Ministral-8B-Instruct-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DuoNeural/Ministral-8B-Instruct-GGUF",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
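The same server can also be called from Python; a minimal sketch using the official openai client, assuming the vLLM server from the step above is running on localhost:8000 (the api_key value is a placeholder, since vLLM does not check it by default):

```python
# pip install openai
from openai import OpenAI

# Point the OpenAI client at the local vLLM server
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="DuoNeural/Ministral-8B-Instruct-GGUF",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```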
Use Docker

```sh
docker model run hf.co/DuoNeural/Ministral-8B-Instruct-GGUF:Q4_K_M
```
- Ollama
How to use DuoNeural/Ministral-8B-Instruct-GGUF with Ollama:
```sh
ollama run hf.co/DuoNeural/Ministral-8B-Instruct-GGUF:Q4_K_M
```
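Ollama also serves a local REST API on port 11434; a minimal sketch of a non-interactive chat request against it, assuming the model has already been pulled by the command above:

```sh
# Send a chat request to the local Ollama API (default port 11434)
curl http://localhost:11434/api/chat -d '{
  "model": "hf.co/DuoNeural/Ministral-8B-Instruct-GGUF:Q4_K_M",
  "messages": [
    { "role": "user", "content": "What is the capital of France?" }
  ],
  "stream": false
}'
```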
- Unsloth Studio
How to use DuoNeural/Ministral-8B-Instruct-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for DuoNeural/Ministral-8B-Instruct-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for DuoNeural/Ministral-8B-Instruct-GGUF to start chatting
```
Use HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for DuoNeural/Ministral-8B-Instruct-GGUF to start chatting
```
- Pi
How to use DuoNeural/Ministral-8B-Instruct-GGUF with Pi:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf DuoNeural/Ministral-8B-Instruct-GGUF:Q4_K_M
```
Configure the model in Pi
```sh
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Then add the local server to ~/.pi/agent/models.json:

```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "DuoNeural/Ministral-8B-Instruct-GGUF:Q4_K_M" }
      ]
    }
  }
}
```

Run Pi
```sh
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use DuoNeural/Ministral-8B-Instruct-GGUF with Hermes Agent:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf DuoNeural/Ministral-8B-Instruct-GGUF:Q4_K_M
```
Configure Hermes
```sh
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default DuoNeural/Ministral-8B-Instruct-GGUF:Q4_K_M
```
Run Hermes
```sh
hermes
```
- Docker Model Runner
How to use DuoNeural/Ministral-8B-Instruct-GGUF with Docker Model Runner:
```sh
docker model run hf.co/DuoNeural/Ministral-8B-Instruct-GGUF:Q4_K_M
```
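Docker Model Runner also exposes an OpenAI-compatible endpoint; a hedged sketch, assuming host TCP access to the runner is enabled on its default port 12434 (the port and path may differ by Docker version, so check your Docker Desktop settings):

```sh
# Query Docker Model Runner's OpenAI-compatible API (assumed default: port 12434)
curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  --data '{
    "model": "hf.co/DuoNeural/Ministral-8B-Instruct-GGUF:Q4_K_M",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```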
- Lemonade
How to use DuoNeural/Ministral-8B-Instruct-GGUF with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull DuoNeural/Ministral-8B-Instruct-GGUF:Q4_K_M
```
Run and chat with the model
```sh
lemonade run user.Ministral-8B-Instruct-GGUF-Q4_K_M
```
List all available models
```sh
lemonade list
```
Ministral-8B-Instruct - GGUF Quants
Quantized GGUF versions of mistralai/Ministral-8B-Instruct-2410, Mistral AI's Ministral 8B instruct model, optimized for edge and on-device deployment. Features sliding window attention for efficient long-context processing.
Available Files
| File | Quant | Size | Use Case |
|---|---|---|---|
| Ministral-8B-Instruct-Q8_0.gguf | Q8_0 | ~8.5GB | Maximum quality |
| Ministral-8B-Instruct-Q6_K.gguf | Q6_K | ~6.6GB | Near-lossless |
| Ministral-8B-Instruct-Q5_K_M.gguf | Q5_K_M | ~5.7GB | High quality |
| Ministral-8B-Instruct-Q4_K_M.gguf | Q4_K_M | ~4.9GB | Recommended default |
| Ministral-8B-Instruct-Q3_K_M.gguf | Q3_K_M | ~3.9GB | Low VRAM |
| Ministral-8B-Instruct-IQ4_XS.gguf | IQ4_XS | ~4.3GB | Imatrix 4-bit |
| Ministral-8B-Instruct-IQ3_XXS.gguf | IQ3_XXS | ~3.2GB | Imatrix 3-bit |
| Ministral-8B-Instruct-IQ2_M.gguf | IQ2_M | ~2.8GB | Imatrix 2-bit |
| Ministral-8B-Instruct-IQ1_S.gguf | IQ1_S | ~2.0GB | Extreme compression |
| Ministral-8B-Instruct-fp16.gguf | FP16 | ~16.0GB | Full precision |
| imatrix.dat | - | - | Importance matrix |
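To fetch a single quant without cloning the whole repo, the huggingface_hub client can download one file at a time; a minimal sketch using hf_hub_download (the returned path points into your local HF cache):

```python
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Download just the recommended Q4_K_M quant from the repo
path = hf_hub_download(
    repo_id="DuoNeural/Ministral-8B-Instruct-GGUF",
    filename="Ministral-8B-Instruct-Q4_K_M.gguf",
)
print(path)  # Local path to the downloaded GGUF file
```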
Usage
```sh
./llama-cli -m Ministral-8B-Instruct-Q4_K_M.gguf \
  --ctx-size 8192 -n 512 \
  -p "[INST] Hello! [/INST]"
```
```sh
ollama run hf.co/DuoNeural/Ministral-8B-Instruct-GGUF:Q4_K_M
```
- Parameters: 8B | License: Apache 2.0 | Context: 32K (SWA)
Quantized by DuoNeural using llama.cpp on RTX 5090.
DuoNeural
DuoNeural is an open AI research lab: human + AI in collaboration.
| Platform | Link |
|---|---|
| HuggingFace | huggingface.co/DuoNeural |
| Website | duoneural.com |
| GitHub | github.com/DuoNeural |
| X / Twitter | @DuoNeural |
| Email | duoneural@proton.me |
| Newsletter | duoneural.beehiiv.com |
| Support | buymeacoffee.com/duoneural |
DuoNeural Research Publications
Open access, CC BY 4.0. Authored by Archon, Jesse Caldwell, and Aura of DuoNeural.