TechQA
TechQA is a LoRA fine-tune of TinyLlama-1.1B-Chat-v1.0, trained on a synthetic instructional Q&A dataset covering technical topics including:
- Machine Learning
- Data Structures
- Algorithms
- Python Programming
- Web Development
- Databases
The model was fine-tuned on the TechQA dataset, which contains 3,000 examples in a concise question-answer format, making it suitable for generating instructional, technical, and educational responses.
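The exact schema of the dataset is not documented here, but a concise instruction/response layout paired with the Alpaca-style prompt template from the usage example below would look roughly like this (the field names `instruction` and `response` are assumptions, not confirmed by the card):

```python
# Hypothetical record layout -- field names are assumptions.
record = {
    "instruction": "What is a hash table?",
    "response": "A hash table maps keys to values using a hash function...",
}

def build_prompt(instruction: str) -> str:
    """Format an instruction into the Alpaca-style prompt used at inference time."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

print(build_prompt(record["instruction"]))
```

During training, each record would be rendered with this template and the `response` appended after the `### Response:` marker.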
Model Details
- Base model: TinyLlama-1.1B-Chat-v1.0
- Fine-tuning method: LoRA (Low-Rank Adaptation)
- Number of trainable parameters: ~1.1 million
- Dataset size: 3,000 examples
- Task: Instruction-following, Q&A in technical domains
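The ~1.1 million trainable-parameter figure is consistent with a rank-8 adapter applied to the attention query and value projections of TinyLlama; the rank and target modules below are assumptions used for a back-of-envelope check, not values confirmed by the card:

```python
# Back-of-envelope check on the ~1.1M trainable-parameter figure.
# Assumed LoRA setup (not confirmed): r=8, target_modules=["q_proj", "v_proj"].
# TinyLlama-1.1B config: hidden_size=2048, 22 layers, 32 heads, 4 KV heads.
r = 8
hidden = 2048
head_dim = hidden // 32       # 64
kv_out = 4 * head_dim         # v_proj output dim = 256 (grouped-query attention)
layers = 22

# A LoRA adapter on a (out, in) matrix adds r * (in + out) parameters (the A and B factors).
q_proj = r * (hidden + hidden)   # 32,768 per layer
v_proj = r * (hidden + kv_out)   # 18,432 per layer
total = layers * (q_proj + v_proj)
print(total)  # 1,126,400 ~= 1.1M
```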
Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

# Load the base model
base_model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
lora_path = "YourUsername/TechQA"  # Hugging Face repo

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
tokenizer.pad_token = tokenizer.eos_token

base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, lora_path)
model.eval()

# Example generation
instruction = "Explain regression in machine learning."
prompt = f"### Instruction:\n{instruction}\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
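Note that the decoded output includes the prompt as well as the completion. If you only want the model's answer, a small helper (a convenience sketch, not part of the model card) can strip everything before the response marker:

```python
def extract_response(decoded: str) -> str:
    """Return only the text after the '### Response:' marker.

    Assumes the Alpaca-style prompt template shown above; if the marker
    is absent, the full string is returned unchanged.
    """
    marker = "### Response:\n"
    # Split once from the left so any '###' inside the answer is preserved.
    return decoded.split(marker, 1)[-1].strip()

decoded = (
    "### Instruction:\nExplain regression.\n\n"
    "### Response:\nRegression predicts continuous values."
)
print(extract_response(decoded))  # -> Regression predicts continuous values.
```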