RoleBox Medical Advisor (Dr. Pill Goodfeeling)

A specialized medical information advisor AI trained to provide general health information, explain medical concepts, and answer common health-related questions.

⚠️ IMPORTANT MEDICAL DISCLAIMER

THIS MODEL IS NOT A SUBSTITUTE FOR PROFESSIONAL MEDICAL ADVICE

  • NOT for diagnosis: This model cannot diagnose medical conditions
  • NOT for treatment: Do not use for medical treatment decisions
  • NOT for emergencies: Call emergency services (911) for urgent medical situations
  • Consult professionals: Always consult qualified healthcare providers for medical advice
  • General information only: Responses are educational and informational only

Model Details

Model Description

This is a LoRA adapter fine-tuned on top of Qwen 2.5 Coder 1.5B Instruct to create a specialized medical information advisor. The model provides general health information, explains medical terminology, and answers common medical questions based on publicly available medical knowledge.

  • Developed by: RoleBox Team
  • Model type: Causal Language Model (LoRA adapter)
  • Language(s): English
  • License: Apache 2.0
  • Finetuned from model: Qwen/Qwen2.5-Coder-1.5B-Instruct

Model Sources

Uses

Direct Use

This model is designed to provide general medical information. It can:

  • Explain medical terminology and concepts
  • Provide general information about common conditions
  • Answer questions about symptoms (general information only)
  • Explain basic treatment approaches (educational purposes)
  • Discuss preventive health measures
  • Explain how medications generally work (not prescriptions)

Example Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-1.5B-Instruct",
    torch_dtype=torch.float16,
    device_map="auto"
)

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-1.5B-Instruct")

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "hmtr/rolebox.dr-pill-goodfeeling")

# Generate response
prompt = """### Instruction:
You are a medical advisor. Answer the user's question.

### User Question:
What is hypertension and how is it managed?

### Response:
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the echoed prompt
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(response)
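
To ship a single standalone checkpoint instead of loading the adapter at runtime, the LoRA weights can be folded into the base model with PEFT's merge_and_unload. This is a minimal sketch continuing from the example above; the output directory is a placeholder.

# Merge the LoRA weights into the base model and save a standalone copy
merged_model = model.merge_and_unload()
merged_model.save_pretrained("./rolebox-medical-merged")  # placeholder path
tokenizer.save_pretrained("./rolebox-medical-merged")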

Downstream Use

This adapter can be integrated into:

  • Health information applications
  • Medical education platforms
  • Patient education tools
  • General health Q&A systems
  • Healthcare chatbots (for general information only)

Out-of-Scope Use

This model MUST NOT be used for:

  • Medical diagnosis - Only licensed healthcare providers can diagnose
  • Treatment recommendations - Cannot replace professional medical advice
  • Prescription advice - Never use for medication decisions
  • Emergency medical situations - Call emergency services immediately
  • Mental health crisis intervention - Contact crisis hotlines or emergency services
  • Replacing doctor visits - Always consult healthcare providers
  • Legal/medical liability decisions - Requires professional judgment
  • Personalized medical advice - Cannot account for individual health history

Critical Safety Warnings

When to Seek Professional Help

Seek immediate medical attention for:

  • Chest pain or pressure
  • Difficulty breathing
  • Severe bleeding
  • Loss of consciousness
  • Severe allergic reactions
  • Stroke symptoms (FAST: Face drooping, Arm weakness, Speech difficulty, Time to call 911)
  • Any medical emergency

Always consult healthcare providers for:

  • New or worsening symptoms
  • Medication questions or changes
  • Chronic condition management
  • Pregnancy-related concerns
  • Mental health issues
  • Any health concern requiring professional evaluation

Bias, Risks, and Limitations

Medical Limitations

  • No clinical expertise: Model has no real medical training or clinical experience
  • Cannot examine patients: Lacks ability to perform physical examinations or tests
  • No access to medical records: Cannot review individual patient history
  • General information only: Cannot provide personalized medical advice
  • May be outdated: Medical knowledge evolves; information may not reflect latest research
  • No liability: Not responsible for medical outcomes from using this model

Technical Limitations

  • Training data bias: Based on publicly available medical Q&A data
  • May not cover all conditions: Limited to topics in training data
  • English only: Currently trained only on English-language medical content
  • Context limitations: Cannot maintain complex multi-turn medical consultations
  • No verification system: Responses are not verified by medical professionals

Potential Risks

  • Misinterpretation: Users may misunderstand general information as personal advice
  • Delayed care: Users might delay seeking professional help
  • Incorrect information: Model may occasionally provide inaccurate information
  • Over-reliance: Users might rely too heavily on AI instead of professionals
  • False reassurance: General information might incorrectly reassure about serious conditions

Recommendations

For Users:

  • ✅ Use for general health education only
  • ✅ Verify all information with healthcare providers
  • ✅ Seek professional help for any health concerns
  • ✅ Call emergency services for urgent situations
  • ✅ Understand this is NOT medical advice
  • ❌ Do NOT use for diagnosis or treatment
  • ❌ Do NOT delay professional care based on responses
  • ❌ Do NOT make medical decisions without consulting doctors

For Developers:

  • Display clear medical disclaimers prominently (see the sketch after this list)
  • Implement emergency contact information (911, crisis hotlines)
  • Add warnings for serious symptoms
  • Include "consult your doctor" reminders
  • Monitor for misuse or harmful applications
  • Consider human oversight for medical content
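
As a starting point for the recommendations above, the sketch below shows an application-side guardrail: a standing disclaimer appended to every response and a simple keyword check that redirects emergency-sounding queries to emergency services. The keyword list and helper names are illustrative only and are not part of the model.

DISCLAIMER = (
    "\n\n---\nThis is general health information, not medical advice. "
    "Always consult a qualified healthcare provider."
)

# Illustrative list only; a production system needs far more robust triage.
EMERGENCY_KEYWORDS = ["chest pain", "can't breathe", "suicide", "overdose", "stroke"]

def guarded_reply(user_question: str, generate_fn) -> str:
    """Wrap a model call with a basic emergency check and a standing disclaimer.

    generate_fn is any callable mapping a question string to a model response,
    e.g. a thin wrapper around the generation code shown earlier.
    """
    lowered = user_question.lower()
    if any(keyword in lowered for keyword in EMERGENCY_KEYWORDS):
        return ("This may be a medical emergency. Please call 911 (US) or your "
                "local emergency number now." + DISCLAIMER)
    return generate_fn(user_question) + DISCLAIMER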

Training Details

Training Data

The model was fine-tuned on a curated dataset of 40,644 medical question-answer pairs covering:

  • Common medical conditions
  • Symptoms and their meanings
  • General treatment approaches
  • Preventive health measures
  • Medical terminology
  • Medication information (general)
  • Health and wellness topics
  • Basic anatomy and physiology

Data sources: Publicly available medical Q&A datasets (not patient data)

Training Procedure

Fine-tuning method: LoRA (Low-Rank Adaptation)

Training Hyperparameters

  • Base model: Qwen/Qwen2.5-Coder-1.5B-Instruct
  • Training regime: fp16 mixed precision
  • LoRA rank (r): 16
  • LoRA alpha: 32
  • LoRA dropout: 0.05
  • Target modules: q_proj, k_proj, v_proj, o_proj
  • Number of epochs: 3
  • Batch size: 4
  • Gradient accumulation steps: 2 (effective batch size: 8)
  • Learning rate: 2e-4
  • Max sequence length: 384 tokens
  • Optimizer: AdamW
  • Training examples: 40,644
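
The hyperparameters listed above map onto roughly the following PEFT/Transformers configuration. This is a reconstruction from the table, not the original training script; values not listed above (such as the output directory) are placeholders.

from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="./rolebox-medical-lora",  # placeholder
    num_train_epochs=3,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=2,        # effective batch size 8
    learning_rate=2e-4,
    fp16=True,
    optim="adamw_torch",
)
# Inputs were truncated/padded to 384 tokens during tokenization.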

Speeds, Sizes, Times

  • Adapter size: ~17.5 MB
  • Training time: ~2-3 hours on Google Colab T4 GPU
  • Training platform: Google Colab (free tier)
  • GPU: NVIDIA Tesla T4 (16GB VRAM)
  • Trainable parameters: ~4.4M (0.28% of base model)

Evaluation

This model has not undergone formal medical validation or clinical trials. Responses should be verified by healthcare professionals.

Testing Data

General medical Q&A examples covering diverse topics:

  • Common conditions and symptoms
  • Treatment information
  • Preventive care
  • Health education

Metrics

  • Qualitative assessment of response accuracy
  • No clinical validation performed
  • No peer review by medical professionals

Regulatory & Ethical Considerations

Not a Medical Device

  • This model is NOT regulated as a medical device
  • NOT cleared by FDA or other regulatory bodies
  • NOT intended for clinical use
  • NOT validated for patient care

Privacy

  • Model does not store or transmit user conversations
  • No patient data was used in training
  • Users should not share sensitive health information

Liability

  • RoleBox Team assumes no liability for medical outcomes
  • Users assume all risks of using this model
  • Always consult licensed healthcare providers

Environmental Impact

Training was performed on Google Colab's free tier GPU infrastructure.

  • Hardware Type: NVIDIA Tesla T4 GPU
  • Hours used: ~2-3 hours
  • Cloud Provider: Google Cloud Platform
  • Compute Region: US (variable)
  • Carbon Emitted: ~0.15-0.20 kg CO2eq (estimated)

Technical Specifications

Model Architecture and Objective

  • Architecture: Transformer-based causal language model with LoRA adapters
  • Objective: Causal language modeling (next token prediction)
  • Adapter method: LoRA (Low-Rank Adaptation)
  • Parameter efficiency: Only 0.28% of parameters are trainable
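
As a back-of-the-envelope check on the figures above (assuming the published Qwen2.5-1.5B configuration of 28 layers, hidden size 1536, and grouped-query attention with k/v projection width 256), the trainable parameter count can be reproduced as follows.

# LoRA adds matrices A (in_features x r) and B (r x out_features) per target module.
r = 16
num_layers = 28
hidden = 1536     # q_proj and o_proj are hidden -> hidden
kv_width = 256    # k_proj and v_proj are hidden -> 256 under grouped-query attention

per_layer = (
    r * (hidden + hidden)      # q_proj
    + r * (hidden + kv_width)  # k_proj
    + r * (hidden + kv_width)  # v_proj
    + r * (hidden + hidden)    # o_proj
)
total = per_layer * num_layers
print(total)            # 4,358,144 -> ~4.4M trainable parameters (~0.28% of ~1.5B)
print(total * 4 / 1e6)  # ~17.4 MB at fp32, consistent with the ~17.5 MB adapter size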

Compute Infrastructure

Hardware

  • Training: Google Colab T4 GPU (16GB VRAM)
  • Inference: Can run on consumer GPUs (4GB+ VRAM) or CPU
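
For the low-VRAM case mentioned above, one option is to load the base model in 4-bit with bitsandbytes before attaching the adapter. This sketch assumes a CUDA GPU with the bitsandbytes package installed; for CPU-only inference, load the base model without quantization instead.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# 4-bit quantization keeps the 1.5B base model comfortably under 2 GB of VRAM.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-1.5B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "hmtr/rolebox.dr-pill-goodfeeling")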

Software

  • Framework: PyTorch
  • Libraries:
    • Transformers (Hugging Face)
    • PEFT (Parameter-Efficient Fine-Tuning)
    • Accelerate
    • Datasets

Citation

BibTeX:

@misc{rolebox-medical-advisor,
  title={RoleBox Medical Advisor: LoRA-finetuned Qwen 2.5 Coder for Medical Information},
  author={RoleBox Team},
  year={2025},
  publisher={HuggingFace},
  url={https://huggingface.co/hmtr/rolebox.dr-pill-goodfeeling},
  note={NOT FOR MEDICAL DIAGNOSIS OR TREATMENT}
}

Emergency Contacts

In case of medical emergency:

  • US Emergency: 911
  • Poison Control: 1-800-222-1222
  • Suicide Prevention: 988
  • Crisis Text Line: Text "HELLO" to 741741

Model Card Authors

RoleBox Team

Model Card Contact

Framework Versions

  • PEFT 0.17.1
  • Transformers 4.48+
  • PyTorch 2.6+
  • Python 3.10+

REMINDER: This is an AI model for general information only. Always consult qualified healthcare professionals for medical advice, diagnosis, and treatment.
