🧠 SyntraLLM

SyntraLLM is a packaged and redistributed version of an open-source, high-performance reasoning language model based on the DeepSeek R1 architecture and weights.

This repository provides:

  • Full model weights in Hugging Face–compatible format
  • Tokenizer & configs
  • Example inference code (Python + Node.js)
  • Ready-to-deploy structure for chatbots, agents, and backend systems

SyntraLLM is intended for the Syntra ecosystem, enabling seamless LLM integration across applications, automation flows, and developer tools.


πŸ” Base Model

This model is derived from the open-source DeepSeek R1 release.

  • Original Authors: DeepSeek
  • License: See included LICENSE file (DeepSeek R1 license applies)
  • Architecture: R1 (reasoning-focused causal transformer)

No modifications are made to the model’s weights or architecture.
Only packaging, structure, naming, and documentation have been adapted for Syntra distribution.


✨ Features

  • ⚑ High-performance reasoning and chain-of-thought generation
  • πŸ“¦ HuggingFace-compatible folder structure
  • πŸ”§ Clean, predictable inference behavior
  • 🧩 Suitable for agents, tools, automation, and backend AI services
  • πŸ”Œ Works on CPU, GPU, and cloud inference backends
  • 🧰 Supports vLLM, TGI, and local Transformers (see the serving sketch below)
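
For example, serving with vLLM takes only a few lines. This is a minimal sketch, assuming vLLM is installed and the model ID resolves on the Hugging Face Hub:

```python
from vllm import LLM, SamplingParams

# Assumes vLLM is installed (`pip install vllm`) and the weights are
# available locally or on the Hub under this ID.
llm = LLM(model="syntra-dev/SyntraLLM")
params = SamplingParams(temperature=0.7, max_tokens=256)

# generate() takes a list of prompts and batches them for throughput.
outputs = llm.generate(["Explain how a blockchain works in simple terms."], params)
print(outputs[0].outputs[0].text)
```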

πŸ“¦ Repository Contents

  • config.json
  • tokenizer.json
  • tokenizer_config.json
  • generation_config.json
  • model.safetensors
  • README.md (this file)
  • LICENSE (original DeepSeek R1 license)


πŸš€ Quick Start (Python)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "syntra-dev/SyntraLLM"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain how a blockchain works in simple terms."

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=300,
    do_sample=True,   # temperature only takes effect when sampling is enabled
    temperature=0.7,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
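
For interactive use you can stream tokens as they are generated instead of waiting for the full completion. A minimal sketch using Transformers' TextStreamer, reusing the tokenizer, model, and inputs from above:

```python
from transformers import TextStreamer

# skip_prompt avoids re-printing the input; extra kwargs are passed
# through to tokenizer.decode().
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Tokens are printed to stdout as soon as they are decoded.
model.generate(**inputs, max_new_tokens=300, streamer=streamer)
```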


🟦 Quick Start (Node.js – transformers.js)

```js
import { pipeline } from "@xenova/transformers";

const pipe = await pipeline(
  "text-generation",
  "syntra-dev/SyntraLLM"
);

const out = await pipe("What is quantum entanglement?", {
  max_new_tokens: 200
});

console.log(out[0].generated_text);
```

πŸ”₯ Chat Template

SyntraLLM accepts chat-style prompts formatted with role tags:

```
<user>
Your question here...
</user>

<assistant>
```

Example:

```python
prompt = "<user>\nGive me a 1 paragraph summary of Solana.\n</user>\n<assistant>"
```
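
If the bundled tokenizer_config.json defines a chat template, you can also let Transformers build the prompt string for you. A minimal sketch, assuming a template is present in the shipped config:

```python
messages = [
    {"role": "user", "content": "Give me a 1 paragraph summary of Solana."}
]

# Renders the messages with the tokenizer's own template and appends
# the opening of the assistant turn.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
```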

πŸ§ͺ Example Prompting (Reasoning Style)

```
<user>
Solve this: If train A travels 30 km in 20 minutes and train B travels 45 km in 30 minutes, which one is faster?
Show step-by-step reasoning.
</user>
<assistant>
```
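
Reasoning-style outputs tend to run long, so leave more generation headroom than in the quick start. A sketch reusing the tokenizer and model loaded earlier, with the prompt above:

```python
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=1024,  # step-by-step answers need more room than short Q&A
    do_sample=False,      # greedy decoding keeps the arithmetic stable
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```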

⚠️ Limitations

SyntraLLM inherits all limitations of the base model, including:

  • Possible hallucinations
  • Potential for generating inaccurate or unsafe content
  • Lack of domain-specific training
  • Biases present in the original model

Syntra does not modify or fine-tune the base model.

πŸ”’ Safety & Responsible Use

  • Do not rely on the model for factual decision-making without verification
  • Not suitable for medical, financial, or legal advice
  • Further safety fine-tuning is recommended before production deployment

For production environments, consider:

  • Output moderation
  • Rule-based filtering (see the sketch after this list)
  • Reinforcement learning with safety datasets
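
As a starting point, a rule-based output filter can be a single function. A minimal sketch; the blocked patterns and refusal text are placeholders you would define for your own application:

```python
import re

# Hypothetical blocklist: replace these patterns with terms that matter
# for your deployment.
BLOCKED_PATTERNS = [
    re.compile(r"\bssn\b", re.IGNORECASE),
    re.compile(r"credit card number", re.IGNORECASE),
]

def filter_output(text: str) -> str:
    """Return the model output, or a refusal if any blocked pattern matches."""
    if any(p.search(text) for p in BLOCKED_PATTERNS):
        return "[response withheld by content filter]"
    return text
```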

πŸ“„ License

This repository redistributes the original model under the DeepSeek R1 license.
See LICENSE for full terms.
SyntraLLM only repackages and distributes the model; it claims no ownership of the underlying training or weights.

🏷 Maintainer

Syntra Dev Team
HuggingFace: https://huggingface.co/syntra-dev

You are welcome to contribute extensions, tools, or integrations!