Instructions for using prithivMLmods/Primal-Mini-3B-Exp with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use prithivMLmods/Primal-Mini-3B-Exp with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="prithivMLmods/Primal-Mini-3B-Exp")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Primal-Mini-3B-Exp")
model = AutoModelForCausalLM.from_pretrained("prithivMLmods/Primal-Mini-3B-Exp")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use prithivMLmods/Primal-Mini-3B-Exp with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "prithivMLmods/Primal-Mini-3B-Exp"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "prithivMLmods/Primal-Mini-3B-Exp",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker

```bash
docker model run hf.co/prithivMLmods/Primal-Mini-3B-Exp
```
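Because the server exposes an OpenAI-compatible API, you can also call it from Python with the `openai` client instead of curl. This is a minimal sketch, assuming the server above is running locally on port 8000; the same pattern works against the SGLang server described below if you change the port to 30000.

```python
from openai import OpenAI

# Point the client at the local vLLM server. The API key is not checked
# by default, but the client requires a non-empty value.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="prithivMLmods/Primal-Mini-3B-Exp",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```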
- SGLang
How to use prithivMLmods/Primal-Mini-3B-Exp with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "prithivMLmods/Primal-Mini-3B-Exp" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "prithivMLmods/Primal-Mini-3B-Exp",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "prithivMLmods/Primal-Mini-3B-Exp" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "prithivMLmods/Primal-Mini-3B-Exp",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
- Docker Model Runner
How to use prithivMLmods/Primal-Mini-3B-Exp with Docker Model Runner:
```bash
docker model run hf.co/prithivMLmods/Primal-Mini-3B-Exp
```
---
license: llama3.1
language:
- en
base_model:
- prithivMLmods/Bellatrix-Tiny-3B-R1
pipeline_tag: text-generation
library_name: transformers
tags:
- Llama
- R1
- Reasoning
- '5e-6'
---
# **Primal-Mini-3B-Exp**
Primal-Mini-3B-Exp is built on the 3B-parameter Llama architecture, fine-tuned from prithivMLmods/Bellatrix-Tiny-3B-R1, and designed to enhance the reasoning capabilities of 3B-parameter models. It has been fine-tuned on a synthetic dataset derived from a subset of outputs from Qwen's QwQ and DeepSeek R1, further optimizing its chain-of-thought (CoT) reasoning and logical problem-solving abilities. The model demonstrates significant improvements in context understanding, structured data processing, and long-context comprehension, making it well suited for complex reasoning tasks, instruction following, and text generation.
### **Key Improvements**
1. **Advanced Reasoning & Logic**: Optimized for multi-step problem-solving, logical deduction, and contextual analysis.
2. **Fine-Tuned Instruction Following**: Generates precise responses, structured outputs (e.g., JSON; see the sketch after the Quickstart below), and extended long-form text (4K+ tokens).
3. **Greater Adaptability**: Excels in role-playing, multi-turn dialogues, and diverse system prompts.
4. **Long-Context Support**: Handles up to **64K tokens** and generates up to **4K tokens** per output.
5. **Multilingual Proficiency**: Supports over **20 languages**, including Chinese, English, French, Spanish, Portuguese, German, and more.
### **Quickstart with Transformers**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Primal-Mini-3B-Exp"

# Load the checkpoint; device_map="auto" places weights on available GPUs
# and torch_dtype="auto" uses the dtype stored in the checkpoint.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain the concept of logical reasoning in AI."
messages = [
    {"role": "system", "content": "You are an expert AI assistant specialized in reasoning and logic."},
    {"role": "user", "content": prompt}
]

# Render the conversation with the model's chat template, then tokenize.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=256
)

# Drop the prompt tokens so only the newly generated completion is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
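As a concrete example of the structured-output capability noted in Key Improvement 2, the sketch below reuses the `model` and `tokenizer` loaded in the Quickstart to request a JSON answer. The schema in the system prompt is a hypothetical example, and the model is not guaranteed to emit valid JSON, so the output is parsed defensively.

```python
import json

# Hypothetical schema, invented for this example; any JSON shape can be requested.
json_messages = [
    {"role": "system", "content": "Respond only with JSON of the form {\"answer\": string, \"steps\": [string]}."},
    {"role": "user", "content": "What is 17 * 24? Show your steps."},
]
text = tokenizer.apply_chat_template(json_messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=256)
generated_ids = [out[len(inp):] for inp, out in zip(model_inputs.input_ids, generated_ids)]
raw = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

# Parse defensively: fall back to the raw text if the model strays from JSON.
try:
    print(json.dumps(json.loads(raw), indent=2))
except json.JSONDecodeError:
    print("Non-JSON output:\n", raw)
```

For stricter guarantees, constrained decoding on the serving side can enforce a schema at generation time.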
### **Intended Use**
- **Advanced Logical & Analytical Reasoning**: Designed for problem-solving, multi-step deductions, and cognitive reasoning tasks.
- **Mathematical & Scientific Computation**: Supports theorem proving, complex calculations, and scientific knowledge retrieval.
- **Code Generation & Debugging**: Generates optimized code, detects errors, and improves programming workflows.
- **Structured Data Analysis**: Processes tables, JSON, and structured formats for data-centric applications (see the sketch after this list).
- **Multilingual Reasoning & Translation**: High proficiency across **20+ languages** for international applications.
- **Extended Text Generation**: Capable of generating research papers, instructional guides, and in-depth reports.
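To illustrate the structured-data use case from the list above, here is a small hypothetical sketch that feeds a JSON record to the model through the high-level pipeline shown earlier; the sample record and the question are invented for this example.

```python
import json
from transformers import pipeline

pipe = pipeline("text-generation", model="prithivMLmods/Primal-Mini-3B-Exp")

# Hypothetical sample record, invented for this illustration.
record = {"quarter": "Q1", "revenue": 120000, "costs": 95000}

messages = [
    {"role": "user",
     "content": "Given this record, compute the profit margin and explain briefly:\n"
                + json.dumps(record)}
]

# Recent transformers chat pipelines return the full conversation;
# the assistant's reply is the last message.
result = pipe(messages, max_new_tokens=200)
print(result[0]["generated_text"][-1]["content"])
```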
### **Limitations**
1. **Moderate Computational Requirements**: Requires **mid-range consumer GPUs** for optimal inference; as a rough estimate, the 3B parameters alone occupy about 6 GB of VRAM in FP16, before activation and KV-cache overhead.
2. **Language-Specific Variability**: Performance may differ across supported languages, especially for low-resource languages.
3. **Potential Error Accumulation**: Long-form text generation can introduce inconsistencies over extended outputs.
4. **Limited Real-World Awareness**: Knowledge is restricted to training data and may not reflect recent world events.
5. **Prompt Sensitivity**: The quality of responses depends on the specificity and clarity of the input prompt.