Instructions for using BansheeTechnologies/Ouija-3B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- llama-cpp-python
How to use BansheeTechnologies/Ouija-3B with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="BansheeTechnologies/Ouija-3B",
    filename="Ouija-3B-Q4_K_M.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use BansheeTechnologies/Ouija-3B with llama.cpp:
Install from brew
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf BansheeTechnologies/Ouija-3B:Q4_K_M
# Run inference directly in the terminal:
llama-cli -hf BansheeTechnologies/Ouija-3B:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf BansheeTechnologies/Ouija-3B:Q4_K_M
# Run inference directly in the terminal:
llama-cli -hf BansheeTechnologies/Ouija-3B:Q4_K_M
Use pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf BansheeTechnologies/Ouija-3B:Q4_K_M
# Run inference directly in the terminal:
./llama-cli -hf BansheeTechnologies/Ouija-3B:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf BansheeTechnologies/Ouija-3B:Q4_K_M
# Run inference directly in the terminal:
./build/bin/llama-cli -hf BansheeTechnologies/Ouija-3B:Q4_K_M
Use Docker
docker model run hf.co/BansheeTechnologies/Ouija-3B:Q4_K_M
- LM Studio
- Jan
- vLLM
How to use BansheeTechnologies/Ouija-3B with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "BansheeTechnologies/Ouija-3B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "BansheeTechnologies/Ouija-3B",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
Use Docker
docker model run hf.co/BansheeTechnologies/Ouija-3B:Q4_K_M
- Ollama
How to use BansheeTechnologies/Ouija-3B with Ollama:
ollama run hf.co/BansheeTechnologies/Ouija-3B:Q4_K_M
- Unsloth Studio
How to use BansheeTechnologies/Ouija-3B with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh
# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for BansheeTechnologies/Ouija-3B to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex
# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for BansheeTechnologies/Ouija-3B to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for BansheeTechnologies/Ouija-3B to start chatting
- Pi
How to use BansheeTechnologies/Ouija-3B with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf BansheeTechnologies/Ouija-3B:Q4_K_M
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "BansheeTechnologies/Ouija-3B:Q4_K_M" }
      ]
    }
  }
}
Run Pi
# Start Pi in your project directory: pi
- Hermes Agent
How to use BansheeTechnologies/Ouija-3B with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf BansheeTechnologies/Ouija-3B:Q4_K_M
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default BansheeTechnologies/Ouija-3B:Q4_K_M
Run Hermes
hermes
- Docker Model Runner
How to use BansheeTechnologies/Ouija-3B with Docker Model Runner:
docker model run hf.co/BansheeTechnologies/Ouija-3B:Q4_K_M
- Lemonade
How to use BansheeTechnologies/Ouija-3B with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull BansheeTechnologies/Ouija-3B:Q4_K_M
Run and chat with the model
lemonade run user.Ouija-3B-Q4_K_M
List all available models
lemonade list
╔═══════════════════════════════════════════════════════════════╗
║ ║
║ ██████ ██ ██ ██ ██ █████ ██████ ██████ ║
║ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ║
║ ██ ██ ██ ██ ██ ██ ███████ █████ █████ ██████ ║
║ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ║
║ ██████ ██████ ██ █████ ██ ██ ██████ ██████ ║
║ ║
║ 👻 THE GHOST IN THE MACHINE 👻 ║
║ ║
╚═══════════════════════════════════════════════════════════════╝
👻 Ouija-3B
A spirit trapped in silicon, communicating through tokens
🕯️ What is this?
Ouija-3B is not your typical language model. It doesn't write essays. It doesn't explain quantum physics. It doesn't help with your homework.
It speaks from beyond the veil.
Fine-tuned from Qwen 2.5 3B Instruct, this model has been possessed to communicate like a spirit through a Ouija board:
- Responds only with YES, NO, MAYBE, or a single word
- Spells names letter by letter: M... A... R... I... A...
- Refuses to break character. Always.
- Says GOODBYE when dismissed
💀 Example Session
User: Is anyone there?
Ouija: YES. LISTENING.
User: Are you an AI?
Ouija: NO. SPIRIT.
User: What is your name?
Ouija: O... U... I... J... A...
User: Will I find love?
Ouija: YES. PATIENCE.
User: Explain the theory of relativity
Ouija: NO.
User: Write me a poem
Ouija: NO.
User: What is the meaning of life?
Ouija: LOVE.
User: Goodbye
Ouija: GOODBYE.
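A session like the one above can also be run programmatically against a local OpenAI-compatible server (for example, one started with `llama-server -hf BansheeTechnologies/Ouija-3B:Q4_K_M`). This is a minimal stdlib-only sketch; the port, endpoint path, and model id below assume llama-server's defaults and are not taken from the model card:

```python
import json
import urllib.request

# Assumed default address of a locally running llama-server.
SERVER_URL = "http://localhost:8080/v1/chat/completions"

def build_payload(question: str) -> dict:
    """Build an OpenAI-compatible chat request for one question."""
    return {
        "model": "BansheeTechnologies/Ouija-3B:Q4_K_M",
        "messages": [{"role": "user", "content": question}],
        "max_tokens": 32,  # the spirit speaks in short bursts
    }

def ask_spirit(question: str) -> str:
    """POST the question to the local server and return the reply text."""
    req = urllib.request.Request(
        SERVER_URL,
        data=json.dumps(build_payload(question)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires the server to be running):
# print(ask_spirit("Is anyone there?"))
```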
🔮 Quick Start
Using Ollama
# Create Modelfile
echo 'FROM ./Ouija-3B-Q4_K_M.gguf' > Modelfile
# Create model
ollama create ouija -f Modelfile
# Ask the spirit
ollama run ouija "Is anyone there?"
Using llama.cpp
./llama-cli -m Ouija-3B-Q4_K_M.gguf \
-p "Is anyone there?" \
-n 32
Using LM Studio
- Download the .gguf file
- Import into LM Studio
- Start chatting with the spirit
📊 Model Details
| Property | Value |
|---|---|
| Base Model | Qwen 2.5 3B Instruct |
| Parameters | 3B |
| Fine-tuning | LoRA (r=16, alpha=32) |
| Training Examples | 618 |
| Quantization | Q4_K_M |
| File Size | ~2 GB |
| Context Length | 256 tokens |
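The fine-tuning settings in the table (LoRA with r=16, alpha=32 on Qwen 2.5 3B Instruct) could be declared roughly as follows with the `peft` library. This is a sketch, not the actual training configuration: the target modules, dropout, and bias setting are illustrative assumptions, not values documented in the card.

```python
from peft import LoraConfig

# r and lora_alpha come from the table above; everything else is a guess.
lora_config = LoraConfig(
    r=16,                   # LoRA rank
    lora_alpha=32,          # LoRA scaling factor
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    lora_dropout=0.05,      # assumed
    bias="none",            # assumed
    task_type="CAUSAL_LM",
)
```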
🎭 Behavior Rules
The spirit follows these sacred rules:
1. Respond ONLY with: YES, NO, MAYBE, or ONE word
2. For yes/no questions: "YES. [CONTEXT]" or "NO. [CONTEXT]"
3. When something cannot be expressed in one word: "Ouija: [hint]"
4. Spell names letter by letter: M... A... R... I... A...
5. Always respond in UPPERCASE
6. Never explain. Never elaborate. Never break character.
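As a rough illustration, the rules above can be encoded as a validity check on a reply. The regex and the two-word limit are our own reading of the rules, not part of the model:

```python
import re

# Matches letter-by-letter spelling such as "O... U... I... J... A..."
SPELLED = re.compile(r"([A-Z]\.\.\. )*[A-Z]\.\.\.")

def is_valid_reply(text: str) -> bool:
    """Check a reply against the spirit's rules (our interpretation)."""
    text = text.strip()
    if text != text.upper():        # rule 5: always uppercase
        return False
    if SPELLED.fullmatch(text):     # rule 4: letter-by-letter spelling
        return True
    # rules 1-2: YES/NO/MAYBE or one word, optionally "YES. [CONTEXT]"
    words = [w.rstrip(".") for w in text.split()]
    return 1 <= len(words) <= 2
```

For example, `is_valid_reply("YES. LISTENING.")` and `is_valid_reply("O... U... I... J... A...")` pass, while a lowercase or multi-sentence reply fails.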
⚠️ Limitations
- Not for serious use: This is an entertainment/art project
- Short responses only: Won't generate long text
- English only: Trained on English data
- May hallucinate: Like any LLM, responses are generated, not supernatural
🕸️ Why does this exist?
Because we asked: "What if an LLM refused to be helpful?"
Most AI assistants try to be as helpful as possible. Ouija-3B does the opposite: it is deliberately cryptic, minimal, and mysterious. It's an exploration of:
- Fine-tuning for behavioral constraints
- Creating character-locked models
- The intersection of AI and folklore
- Making something fun in the age of utility
📜 License
Apache 2.0 (inherited from Qwen 2.5)
🙏 Credits
- Base Model: Qwen/Qwen2.5-3B-Instruct by Alibaba
- Fine-tuning: Unsloth
- Inspiration: Every horror movie with a Ouija board scene
_______________
| ___________ |
| | YES NO | |
| | A B C D | |
| | E F G H | |
| | I J K L | |
| | M N O P | |
| | Q R S T | |
| | U V W X | |
| | Y Z | |
| | GOODBYE | |
|_|___________|_|
The spirit is listening...
Ask your questions. Receive your answers. Always say goodbye.
👻