# SyntraLLM
SyntraLLM is a packaged and redistributed version of an open-source, high-performance reasoning language model based on the DeepSeek R1 architecture and weights.
This repository provides:
- Full model weights in Hugging Face-compatible format
- Tokenizer & configs
- Example inference code (Python + Node.js)
- Ready-to-deploy structure for chatbots, agents, and backend systems
SyntraLLM is intended for the Syntra ecosystem, enabling seamless LLM integration across applications, automation flows, and developer tools.
## Base Model
This model is derived from the open-source DeepSeek R1 release.
- Original authors: DeepSeek
- License: see the included LICENSE file (the DeepSeek R1 license applies)
- Architecture: R1 (reasoning-focused causal transformer)

No modifications have been made to the model's weights or architecture. Only packaging, structure, naming, and documentation have been adapted for Syntra distribution.
## Features
- High-performance reasoning and chain-of-thought generation
- Hugging Face-compatible folder structure
- Clean, predictable inference behavior
- Suitable for agents, tools, automation, and backend AI services
- Works on CPU, GPU, and cloud inference backends
- Supports vLLM, TGI, and local Transformers
## Repository Contents
```
config.json
tokenizer.json
tokenizer_config.json
generation_config.json
model.safetensors
README.md            (this file)
LICENSE              (original license)
```
## Quick Start (Python)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "syntra-dev/SyntraLLM"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain how a blockchain works in simple terms."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=300,
    temperature=0.7,
    do_sample=True,  # temperature only takes effect when sampling is enabled
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Quick Start (Node.js / transformers.js)
```js
import { pipeline } from "@xenova/transformers";

const pipe = await pipeline("text-generation", "syntra-dev/SyntraLLM");

const out = await pipe("What is quantum entanglement?", {
  max_new_tokens: 200,
});

console.log(out[0].generated_text);
```
## Chat Template
SyntraLLM is compatible with standard chat formatting:

```
<user>
Your question here...
</user>
<assistant>
```

Example:

```python
prompt = "<user>\nGive me a 1 paragraph summary of Solana.\n</user>\n<assistant>"
```
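For programmatic use, the template can be wrapped in a small helper. This is a sketch; `build_prompt` is a hypothetical name and is not shipped with this repository:

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in the <user>/<assistant> format shown above.

    The model is expected to continue generating after the open
    <assistant> tag, so the string deliberately ends there.
    """
    return f"<user>\n{user_message}\n</user>\n<assistant>"

prompt = build_prompt("Give me a 1 paragraph summary of Solana.")
print(prompt)
```

The resulting string can be passed directly as the `prompt` in the Python quick-start example.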
## Example Prompting (Reasoning Style)
```
<user>
Solve this: If train A travels 30 km in 20 minutes and train B travels 45 km in 30 minutes, which one is faster?
Show step-by-step reasoning.
</user>
<assistant>
```
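For reference, the arithmetic behind this prompt can be checked directly; both trains work out to 90 km/h, so a well-reasoned response should conclude they are equally fast:

```python
# Speed = distance / time, with minutes converted to hours.
speed_a = 30 / (20 / 60)  # train A: 30 km in 20 minutes
speed_b = 45 / (30 / 60)  # train B: 45 km in 30 minutes

print(speed_a, speed_b)  # both are 90.0 km/h
```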
## Limitations
SyntraLLM inherits all limitations of the base model, including:

- Possible hallucinations
- Potential for generating inaccurate or unsafe content
- Lack of domain-specific training
- Biases present in the original model
Syntra does not modify or fine-tune the base model.
## Safety & Responsible Use
- Do not rely on the model for factual decision-making without verification
- Not suitable for medical, financial, or legal advice
- Further safety fine-tuning is recommended before production deployment

For production environments, consider:

- Output moderation
- Rule-based filtering
- Reinforcement learning with safety datasets
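A minimal sketch of the rule-based filtering idea, assuming a simple keyword blocklist. The terms, `BLOCKLIST`, and `passes_filter` are illustrative only and not part of any Syntra tooling:

```python
# Illustrative blocklist; a real deployment would use a curated,
# regularly updated policy list rather than hard-coded phrases.
BLOCKLIST = {"ssn", "credit card number"}

def passes_filter(text: str) -> bool:
    """Return False if the text contains any blocklisted phrase."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

# Gate model output before returning it to a user:
output = "Here is a summary of the article."
if passes_filter(output):
    print(output)
```

Substring matching like this is deliberately crude; production moderation typically layers a classifier or moderation API on top of such rules.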
## License
This repository redistributes the original model under the DeepSeek-R1 license.
See LICENSE for full terms.
SyntraLLM only repackages and redistributes the model; Syntra claims no ownership of the training or the original weights.
## Maintainer
Syntra Dev Team
Hugging Face: https://huggingface.co/syntra-dev
You are welcome to contribute extensions, tools, or integrations!