Tags: Text Generation · Transformers · Safetensors · English · llama · chat · sft · reasoning · cot · ultrachat · mixture-of-thoughts · dpo · text-generation-inference
Instructions for using PursuitOfDataScience/llama3.2-1b-thinking with libraries, inference providers, notebooks, and local apps.
Libraries

Transformers
How to use PursuitOfDataScience/llama3.2-1b-thinking with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="PursuitOfDataScience/llama3.2-1b-thinking")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("PursuitOfDataScience/llama3.2-1b-thinking")
model = AutoModelForCausalLM.from_pretrained("PursuitOfDataScience/llama3.2-1b-thinking")
```
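Recent transformers releases also let you pass a list of chat messages directly to the pipeline, which applies the model's chat template for you. A minimal sketch (the message content is illustrative):

```python
# Chat-style call: the pipeline applies the model's chat template internally.
messages = [
    {"role": "user", "content": "Explain the difference between SFT and DPO in two sentences."},
]
result = pipe(messages, max_new_tokens=256, do_sample=True, temperature=0.7)
# The returned conversation ends with the newly generated assistant message.
print(result[0]["generated_text"][-1]["content"])
```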
Notebooks

- Google Colab
- Kaggle
Local Apps

vLLM
How to use PursuitOfDataScience/llama3.2-1b-thinking with vLLM:
Install vLLM from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "PursuitOfDataScience/llama3.2-1b-thinking"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "PursuitOfDataScience/llama3.2-1b-thinking",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
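Because the server speaks the OpenAI-compatible API, you can also call it from Python. A minimal sketch using the openai client, assuming the server above is running locally on port 8000:

```python
from openai import OpenAI

# Point the client at the local vLLM server (any non-empty API key works).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="PursuitOfDataScience/llama3.2-1b-thinking",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```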
Use Docker

```bash
docker model run hf.co/PursuitOfDataScience/llama3.2-1b-thinking
```
SGLang
How to use PursuitOfDataScience/llama3.2-1b-thinking with SGLang:
Install SGLang from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "PursuitOfDataScience/llama3.2-1b-thinking" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "PursuitOfDataScience/llama3.2-1b-thinking",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "PursuitOfDataScience/llama3.2-1b-thinking" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "PursuitOfDataScience/llama3.2-1b-thinking",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Docker Model Runner
How to use PursuitOfDataScience/llama3.2-1b-thinking with Docker Model Runner:
```bash
docker model run hf.co/PursuitOfDataScience/llama3.2-1b-thinking
```
PursuitOfDataScience/llama3.2-1b-thinking
This repository contains a three-stage fine-tuned version of meta-llama/Llama-3.2-1B:
- Supervised fine-tuning (SFT) on a local copy of HuggingFaceH4/ultrachat_200k using an instruction-style, multi-turn chat objective.
- Reasoning training to enhance step-by-step reasoning capabilities, building on the SFT model using the open-r1/Mixture-of-Thoughts dataset.
- Direct Preference Optimization (DPO) alignment using the mlabonne/orpo-dpo-mix-40k dataset to improve response quality and alignment with human preferences.
Model details
- Base model: meta-llama/Llama-3.2-1B
- Stage 1 objective: Supervised fine-tuning for helpful, concise chat responses on Ultrachat-style conversations.
- Stage 2 objective: Specialized reasoning training to improve logical reasoning and Chain of Thought (CoT) capabilities using step-by-step reasoning traces from open-r1/Mixture-of-Thoughts.
- Stage 3 objective: DPO alignment to refine responses based on preference data from mlabonne/orpo-dpo-mix-40k, enhancing safety, helpfulness, and adherence to user constraints.
- Context length: Up to 131,072 tokens (subject to the base model config; see the sketch after this list).
- Training data:
  - SFT: multi-turn dialogues from HuggingFaceH4/ultrachat_200k.
  - Reasoning: open-r1/Mixture-of-Thoughts dataset with step-by-step reasoning traces.
  - DPO: preference pairs from mlabonne/orpo-dpo-mix-40k.
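The advertised context window can be confirmed programmatically. A minimal sketch that reads max_position_embeddings from the released config, the standard field for Llama-family models:

```python
from transformers import AutoConfig

# Read the context window from the config rather than trusting the card.
config = AutoConfig.from_pretrained("PursuitOfDataScience/llama3.2-1b-thinking")
print(config.max_position_embeddings)  # expected: 131072 for Llama-3.2-1B-based models
```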
Inference usage
The model is trained in a chat-style setup. At inference time, prompts are built
as a list of messages and passed through the model's native chat_template
via tokenizer.apply_chat_template:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "PursuitOfDataScience/llama3.2-1b-thinking"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",
)

messages = [
    {
        "role": "system",
        "content": (
            "You are a helpful, concise assistant. "
            "Write clear, well-structured answers that follow the user's constraints."
        ),
    },
    {
        "role": "user",
        "content": "Explain how someone can build a consistent daily learning habit.",
    },
]

prompt_text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(prompt_text, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    temperature=0.7,
    top_p=0.9,
    do_sample=True,
)

# Decode only the generated continuation (excluding the prompt tokens)
generated_tokens = outputs[0][inputs["input_ids"].shape[1]:]
response = tokenizer.decode(generated_tokens, skip_special_tokens=True)
print(response)
```
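For interactive use you may want tokens printed as they are generated. A minimal sketch using transformers' TextStreamer, reusing the model, tokenizer, and inputs from the example above:

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are produced; skip_prompt hides the input.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    streamer=streamer,
)
```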
Multi-turn example
```python
messages = [
    {
        "role": "system",
        "content": (
            "You are a helpful, concise assistant. "
            "Write clear, well-structured answers that follow the user's constraints."
        ),
    },
    {
        "role": "user",
        "content": "Describe the main trade-offs between using small and large language models.",
    },
    {
        "role": "assistant",
        "content": "Small models are cheaper and faster, while large models are usually more capable...",
    },
    {
        "role": "user",
        "content": "Give me a bullet-point summary from the perspective of a startup.",
    },
]

prompt_text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(prompt_text, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    temperature=0.7,
    top_p=0.9,
    do_sample=True,
)
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(response)
```
Chain of Thought (CoT) reasoning example
For reasoning tasks, the model can generate step-by-step thoughts wrapped in `<think>` tags:
```python
messages = [
    {
        "role": "system",
        "content": (
            "You are a helpful, concise assistant. "
            "Use Chain of Thought reasoning with <think> tags for complex problems."
        ),
    },
    {
        "role": "user",
        "content": "If a train travels 60 km in 1 hour, how long will it take to travel 180 km?",
    },
]

prompt_text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(prompt_text, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    temperature=0.7,
    top_p=0.9,
    do_sample=True,
)
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(response)

# Example output:
# <think> The train travels 60 km in 1 hour, so speed is 60 km/h.
# For 180 km, time = distance / speed = 180 / 60 = 3 hours. </think>
# It will take 3 hours.
```
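If you only want the final answer, you can strip the reasoning span before showing the response. A minimal sketch assuming the model emits a single `<think>...</think>` block, as in the example output above (strip_reasoning is a hypothetical helper, not part of the model or tokenizer):

```python
import re

def strip_reasoning(text: str) -> str:
    """Remove <think>...</think> spans, keeping only the final answer."""
    # DOTALL lets the reasoning span multiple lines.
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

print(strip_reasoning(response))  # -> "It will take 3 hours."
```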
Training pipeline (summary)
Instruction SFT (Ultrachat):

- Conversations are converted into lists of messages.
- For each assistant turn, a single training example is built using tokenizer.apply_chat_template.
- Loss is applied only on assistant tokens; system and user tokens are masked (see the masking sketch after this list).

Reasoning Training:

- Fine-tuning on the open-r1/Mixture-of-Thoughts dataset with step-by-step reasoning traces to enhance CoT capabilities.
- Uses reinforcement learning or supervised methods to align with logical reasoning patterns.

DPO Alignment:

- Fine-tuning with Direct Preference Optimization on the mlabonne/orpo-dpo-mix-40k dataset (see the loss sketch after this list).
- Optimizes the model to prefer chosen responses over rejected ones, improving overall alignment.
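A minimal sketch of the assistant-only loss masking described above, assuming labels follow the usual Hugging Face convention of -100 for positions excluded from the loss. The helper name and offset bookkeeping are illustrative, not the repository's actual training code:

```python
import torch

def build_sft_example(tokenizer, messages):
    """One training example per assistant turn: mask everything but the reply.

    Assumes the final entry in `messages` is the assistant turn being trained on.
    """
    # Context = all prior messages plus the generation prompt for the assistant.
    context_ids = tokenizer.apply_chat_template(
        messages[:-1], add_generation_prompt=True
    )
    full_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=False)

    input_ids = torch.tensor(full_ids)
    labels = input_ids.clone()
    labels[: len(context_ids)] = -100  # -100 is ignored by the cross-entropy loss
    return {"input_ids": input_ids, "labels": labels}
```

And the core DPO objective, stated as the standard loss from the DPO paper rather than anything specific to this repository: the policy is pushed to widen its log-probability margin between chosen and rejected responses relative to a frozen reference model.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss: -log sigmoid(beta * (policy margin - reference margin))."""
    policy_margin = policy_chosen_logps - policy_rejected_logps
    ref_margin = ref_chosen_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()
```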
Limitations
- This is a relatively small (1B parameter) model and may hallucinate or struggle on complex, multi-step reasoning tasks.
- Outputs may be inaccurate, unsafe, or biased. Always verify critical information before using it in production.