Instructions to use Mattimax/EliaChess-70m with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Mattimax/EliaChess-70m with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Mattimax/EliaChess-70m")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Mattimax/EliaChess-70m")
model = AutoModelForCausalLM.from_pretrained("Mattimax/EliaChess-70m")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Mattimax/EliaChess-70m with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Mattimax/EliaChess-70m"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Mattimax/EliaChess-70m",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker

```shell
docker model run hf.co/Mattimax/EliaChess-70m
```
- SGLang
How to use Mattimax/EliaChess-70m with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Mattimax/EliaChess-70m" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Mattimax/EliaChess-70m",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Mattimax/EliaChess-70m" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Mattimax/EliaChess-70m",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

- Docker Model Runner
How to use Mattimax/EliaChess-70m with Docker Model Runner:
```shell
docker model run hf.co/Mattimax/EliaChess-70m
```
♟️ Model Card — Mattimax/EliaChess-70M
☕ Support my research
📌 Overview
Mattimax/EliaChess-70M is a ~70 million parameter Small Language Model (SLM) designed to generate chess moves for agentic gameplay within chess arenas, enabling autonomous competition against other LLMs. Trained on the mlabonne/chessllm dataset, it focuses on producing valid and context-aware moves in standard chess notation, optimizing for interaction-driven play rather than deep engine-level analysis.
- Author: Mattimax
- Model Name: EliaChess-70M
- Parameters: ~70M
- Architecture: Transformer (decoder-only)
- Frameworks: PyTorch, Hugging Face Transformers
- Primary Language: English (with some generalization capability)
- License: (to be specified)
🎯 Intended Use
This model is built for:
- Chess move generation (basic/intermediate level)
- Position analysis and explanation
- Educational support for chess learners
- Conversational chess assistants
- Lightweight AI applications (local inference, edge devices)
📚 Training Data
The model was trained using:
- Dataset: mlabonne/chessllm
- Structured chess data, including:
- PGN game records
- Opening sequences
- Move annotations
- Synthetic data augmentation to improve robustness
- Additional general NLP data to enhance fluency
Preprocessing
- Tokenization via Transformers tokenizer
- Standardization of chess notation (SAN, PGN, FEN)
- Cleaning and filtering of noisy samples
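The notation-standardization step can be sketched with a small, stdlib-only normalizer. The specific rules below (stripping annotation glyphs like `!?`, converting zero-style castling `0-0` to the letter form `O-O`) are illustrative assumptions, not the exact preprocessing pipeline used for this model:

```python
import re

def normalize_san(move: str) -> str:
    """Normalize a single SAN token: unify castling and strip annotations."""
    move = move.strip()
    # Convert zero-based castling (0-0, 0-0-0) to the letter-O form
    move = re.sub(r"^0-0-0", "O-O-O", move)
    move = re.sub(r"^0-0(?!-)", "O-O", move)
    # Drop annotation suffixes such as !, ?, !?, ?!
    move = re.sub(r"[!?]+$", "", move)
    return move

print([normalize_san(m) for m in ["e4!", "0-0", "Nf3?!", "0-0-0"]])
# → ['e4', 'O-O', 'Nf3', 'O-O-O']
```

A full pipeline would apply this token-by-token to the moves in each PGN record before tokenization.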
⚙️ Capabilities
✔️ Strengths
- Understands standard chess notation (e.g., e4, Nf3, O-O)
- Explains basic strategies and concepts
- Generates plausible moves in simple positions
- Maintains coherent chess-related conversations
⚠️ Limitations
- Not a substitute for advanced engines like Stockfish
- Limited tactical depth and calculation ability
- May produce illegal or suboptimal moves
- Struggles with complex or deep positions
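Because the model may emit illegal moves, callers in agentic play should validate each suggestion before applying it. Below is a minimal sample-and-retry sketch; `generate_move` and `is_legal` are hypothetical stand-ins (in practice, the model call and a real validator such as python-chess's `Board.parse_san`):

```python
import random

def pick_move(generate_move, is_legal, legal_moves, max_tries=5):
    """Ask the model for a move, retry on illegal output, and fall
    back to a random legal move so the agent never forfeits a turn."""
    for _ in range(max_tries):
        candidate = generate_move()
        if is_legal(candidate):
            return candidate
    return random.choice(legal_moves)

# Toy usage with stubbed-out components:
legal = ["e4", "d4", "Nf3"]
suggestions = iter(["Ke8", "e4"])  # first suggestion is illegal
move = pick_move(lambda: next(suggestions), lambda m: m in legal, legal)
print(move)  # → e4
```

The fallback choice keeps arena games progressing even when the model repeatedly suggests illegal moves.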
🧠 Architecture & Training
- Type: Decoder-only Transformer
- Size: ~70M parameters
- Framework: PyTorch + Hugging Face Transformers
- Training Approach: Fine-tuning on domain-specific data
Hyperparameters (approximate)
- Learning Rate: 5e-5 – 1e-4
- Batch Size: Variable (hardware dependent)
- Epochs: 3–10
- Context Length: 512–2048 tokens
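As one purely illustrative reading of these ranges, a Hugging Face `TrainingArguments` sketch might look like the following; the concrete values are assumptions picked from the ranges above, not the actual training configuration:

```python
from transformers import TrainingArguments

# Illustrative values drawn from the ranges above, not the real run
args = TrainingArguments(
    output_dir="eliachess-70m-finetune",
    learning_rate=5e-5,              # card: 5e-5 – 1e-4
    per_device_train_batch_size=16,  # card: variable (hardware dependent)
    num_train_epochs=3,              # card: 3–10
)
# Note: context length (512–2048 tokens) is set when tokenizing,
# e.g. tokenizer(..., max_length=2048, truncation=True), not here.
```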
🧪 Evaluation
Evaluated on:
- Move prediction tasks
- Chess-related Q&A
- Position understanding
Observations
- Good performance on beginner/intermediate tasks
- Strong linguistic coherence
- Limited strategic depth
🚀 Use Cases
- Chess learning assistants
- Embedded AI in chess apps
- Chat-based chess tools
- Research on domain-specific SLMs
- Local/offline AI systems
⚠️ Ethical Considerations
- Outputs may be incorrect or misleading
- Should not be used in competitive environments without validation
- Users should verify critical outputs with reliable engines
🔧 Integration
Compatible with:
- Hugging Face Transformers
- Local inference pipelines
- Ollama (after conversion)
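The card does not specify a conversion path for Ollama; one common route (an assumption, valid only if the architecture is supported by llama.cpp's converter) is to export the checkpoint to GGUF and register it via a Modelfile:

```shell
# Convert the HF checkpoint to GGUF with llama.cpp's converter
python convert_hf_to_gguf.py ./EliaChess-70m --outfile eliachess-70m.gguf

# Minimal Modelfile contents:
#   FROM ./eliachess-70m.gguf

ollama create eliachess-70m -f Modelfile
ollama run eliachess-70m
```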
📈 Roadmap
- Improve tactical reasoning
- Expand and refine the training dataset
- Hybrid integration with chess engines
👤 Credits
Developed by Mattimax, focused on efficient and specialized AI systems.