---
license: mit
datasets:
- a-m-team/AM-DeepSeek-R1-Distilled-1.4M
language:
- en
metrics:
- accuracy
base_model:
- deepseek-ai/DeepSeek-R1
pipeline_tag: text-generation
library_name: transformers
tags:
- agent
---
# 🧠 SyntraLLM
**SyntraLLM** is a packaged and redistributed version of an open-source, high-performance
reasoning language model based on the **DeepSeek R1** architecture and weights.
This repository provides:
- Full model weights in Hugging Face–compatible format
- Tokenizer & configs
- Example inference code (Python + Node.js)
- Ready-to-deploy structure for chatbots, agents, and backend systems
SyntraLLM is intended for the Syntra ecosystem, enabling seamless LLM integration across
applications, automation flows, and developer tools.
---
## 🔍 Base Model
This model is derived from the open-source **DeepSeek R1** release.
- Original Authors: *DeepSeek*
- License: See included `LICENSE` file (DeepSeek R1 license applies)
- Architecture: R1 (reasoning-focused causal transformer)
No modifications have been made to the model's weights or architecture.
Only packaging, structure, naming, and documentation have been adapted for Syntra distribution.
---
## ✨ Features
- ⚡ High-performance reasoning and chain-of-thought generation
- 📦 HuggingFace-compatible folder structure
- 🔧 Clean, predictable inference behavior
- 🧩 Suitable for agents, tools, automation, and backend AI services
- 🔌 Works on CPU, GPU, and cloud inference backends
- 🧰 Supports vLLM, TGI, and local Transformers (see the vLLM sketch below)
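
A minimal vLLM serving sketch, assuming vLLM is installed and the `syntra-dev/SyntraLLM` repo id from this card resolves; the sampling values are illustrative defaults, not tuned recommendations:

```python
from vllm import LLM, SamplingParams

# Load the model into vLLM's engine (downloads from the Hub on first run).
llm = LLM(model="syntra-dev/SyntraLLM")
params = SamplingParams(temperature=0.7, max_tokens=300)

# Batch of one prompt; vLLM also accepts a list of many prompts.
outputs = llm.generate(["Explain how a blockchain works in simple terms."], params)
print(outputs[0].outputs[0].text)
```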
---
## 📦 Repository Contents
```
config.json
tokenizer.json
tokenizer_config.json
generation_config.json
model.safetensors
README.md            (this file)
LICENSE              (original license)
```
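
A minimal sketch for pulling these files locally with `huggingface_hub` (the repo id matches the one used in the examples below; pinning a `revision` is a reasonable precaution for reproducible deployments):

```python
from huggingface_hub import snapshot_download

# Downloads the full repository (weights, tokenizer, configs) into the
# local Hugging Face cache and returns the resolved path.
local_dir = snapshot_download("syntra-dev/SyntraLLM")
print(local_dir)
```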
---
## 🚀 Quick Start (Python)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "syntra-dev/SyntraLLM"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain how a blockchain works in simple terms."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=300,
    do_sample=True,  # temperature only takes effect when sampling is enabled
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
## 🟦 Quick Start (Node.js – transformers.js)
```javascript
import { pipeline } from "@xenova/transformers";

const pipe = await pipeline(
  "text-generation",
  "syntra-dev/SyntraLLM"
);

const out = await pipe("What is quantum entanglement?", {
  max_new_tokens: 200
});
console.log(out[0].generated_text);
```
---
## 🔥 Chat Template
SyntraLLM is compatible with standard chat formatting:
```
<user>
Your question here...
</user>
<assistant>
```
Example:
```python
prompt = "<user>\nGive me a 1 paragraph summary of Solana.\n</user>\n<assistant>"
```
---
## 🧪 Example Prompting (Reasoning Style)
```
<user>
Solve this: If train A travels 30 km in 20 minutes and train B travels 45 km in 30 minutes, which one is faster?
Show step-by-step reasoning.
</user>
<assistant>
```
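
As a sketch, the prompt above can be run through the `model` and `tokenizer` objects from the Python quick start; the larger token budget and lower temperature here are illustrative choices for multi-step reasoning, not tuned values:

```python
reasoning_prompt = (
    "<user>\nSolve this: If train A travels 30 km in 20 minutes and "
    "train B travels 45 km in 30 minutes, which one is faster?\n"
    "Show step-by-step reasoning.\n</user>\n<assistant>"
)
inputs = tokenizer(reasoning_prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.6)

# Slice off the prompt tokens so only the model's answer is printed.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```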
---
## ⚠️ Limitations
SyntraLLM inherits all limitations of the base model, including:
- Possible hallucinations
- Potential for generating inaccurate or unsafe content
- Lack of domain-specific training
- Biases present in the original model

Syntra does not modify or fine-tune the base model.
---
## 🔒 Safety & Responsible Use
- Do not rely on the model for factual decision-making without verification
- Not suitable for medical, financial, or legal advice
- Further safety fine-tuning is recommended before production deployment

For production environments, consider:
- Output moderation
- Rule-based filtering (see the sketch below)
- Reinforcement learning with safety datasets
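
As one illustration of rule-based filtering, a minimal sketch (the blocklist terms are placeholders; a production system should pair this with a dedicated moderation model or service):

```python
# Hypothetical blocklist; replace with terms relevant to your deployment.
BLOCKLIST = ("medical diagnosis", "investment advice", "legal opinion")

def filter_output(text: str) -> str:
    """Return the text unchanged, or a refusal if a blocked term appears."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKLIST):
        return "This response was withheld by the output filter."
    return text
```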
---
## 📄 License
This repository redistributes the original model under the DeepSeek-R1 license.
See `LICENSE` for full terms.
SyntraLLM only repackages and distributes the model and does not claim training ownership.
---
## 🏷 Maintainer
**Syntra Dev Team**
HuggingFace: https://huggingface.co/syntra-dev

You are welcome to contribute extensions, tools, or integrations!