Model Card for Qwen3-0.6B-MNLP_mcqa_model_text_2_1

This model is a fine-tuned version of andresnowak/Qwen3-0.6B-instruction-finetuned_v2. It has been trained using TRL.

Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
# Load the fine-tuned model as a chat-style text-generation pipeline on GPU.
generator = pipeline("text-generation", model="andresnowak/Qwen3-0.6B-MNLP_mcqa_model_text_2_1", device="cuda")
# Send the question as a single user turn; return only the newly generated text.
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

Training procedure

This model was trained with SFT, fine-tuned as a Seq2Seq MCQA model (the target format is `question\n Letter. Answer`) starting from the Qwen3-0.6B base model, and it was trained with a language-modelling objective (loss on the whole prompt and completion).
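As an illustration of that target format, here is a minimal sketch of how a prompt/completion pair could be built. The `format_example` helper, its field names, and the exact prompt layout are hypothetical, not the actual training code.

```python
# Minimal sketch of the "question\n Letter. Answer" target format described
# above. `format_example` and its field names are hypothetical.

LETTERS = "ABCDEFGHIJ"  # up to 10 choices (cf. mmlu_..._10_choices)

def format_example(question: str, choices: list[str], answer_index: int) -> dict:
    # Prompt: the question followed by the lettered options and an "Answer:" cue.
    options = "\n".join(f"{LETTERS[i]}. {c}" for i, c in enumerate(choices))
    prompt = f"{question}\n{options}\nAnswer:"
    # Completion: "Letter. Answer", i.e. the correct letter plus its text.
    completion = f" {LETTERS[answer_index]}. {choices[answer_index]}"
    return {"prompt": prompt, "completion": completion}

example = format_example(
    "Which planet is known as the Red Planet?",
    ["Venus", "Mars", "Jupiter", "Saturn"],
    answer_index=1,
)
print(example["prompt"] + example["completion"])
```

The full Hydra training configuration is reproduced below: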

```yaml
defaults:
  - override hydra/job_logging: disabled

environment:
  seed: 42

model:
  name: andresnowak/Qwen3-0.6B-instruction-finetuned_v2
  # name: Qwen/Qwen3-0.6B-Base
  hub_model_id: andresnowak/Qwen3-0.6B-MNLP_mcqa_model_text_2

dataset_train:
  - name: andresnowak/MNLP_MCQA_dataset
    config: train
    subset_name: math_qa
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: ScienceQA
    config: train
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: mmlu_auxiliary_train_stem_10_choices 
    config: train
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: ai2_arc_challenge
    config: train
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: ai2_arc_easy
    config: train
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: medmcqa
    config: train
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: openbookqa
    config: train
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: sciq
    config: train

dataset_validation:
  - name: andresnowak/MNLP_MCQA_dataset
    config: validation
    subset_name: math_qa
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: ScienceQA
    config: validation
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: mmlu
    config: validation
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: ai2_arc_challenge
    config: validation
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: ai2_arc_easy
    config: validation
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: medmcqa
    config: validation
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: openbookqa
    config: validation
  - name: andresnowak/MNLP_MCQA_dataset
    subset_name: sciq
    config: validation

dataset_mmlu:
  - name: cais/mmlu
    config: validation
    subjects: ["abstract_algebra", "anatomy", "astronomy", "college_biology", "college_chemistry", "college_computer_science", "college_mathematics", "college_physics", "computer_security", "conceptual_physics", "electrical_engineering", "elementary_mathematics", "high_school_biology",  "high_school_chemistry", "high_school_computer_science", "high_school_mathematics", "high_school_physics", "high_school_statistics", "machine_learning"]


training:
  output_dir: ./output
  logging_dir: ./logs
  resume_dir: None
  report_to: wandb
  learning_rate: 5e-6
  per_device_train_batch_size: 2
  per_device_eval_batch_size: 2
  gradient_accumulation_steps: 32 # to get effective 64
  num_train_epochs: 1
  weight_decay: 0.01
  warmup_ratio: 0.3
  max_grad_norm: 0.05
  linear_layers_max_grad_norm: 1.0
  completion_only_loss: True
```
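For reference, here is a sketch of how the configuration above could be wired into TRL. It assumes `subset_name` maps to the Hugging Face dataset configuration name and `config` to the split; this is not the actual training script.

```python
# Sketch of wiring the config above into TRL (assumptions noted in the lead-in).
from datasets import concatenate_datasets, load_dataset
from trl import SFTConfig

subsets = [
    "math_qa", "ScienceQA", "mmlu_auxiliary_train_stem_10_choices",
    "ai2_arc_challenge", "ai2_arc_easy", "medmcqa", "openbookqa", "sciq",
]
# Concatenate all training subsets into a single dataset.
train_dataset = concatenate_datasets(
    [load_dataset("andresnowak/MNLP_MCQA_dataset", s, split="train") for s in subsets]
)

training_args = SFTConfig(
    output_dir="./output",
    logging_dir="./logs",
    report_to="wandb",
    seed=42,
    learning_rate=5e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=32,  # effective batch size of 64 per device
    num_train_epochs=1,
    weight_decay=0.01,
    warmup_ratio=0.3,
    max_grad_norm=0.05,
    completion_only_loss=True,
)
# `linear_layers_max_grad_norm: 1.0` has no direct SFTConfig field; it implies
# a custom trainer that clips linear-layer gradients as a separate group.
```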

Evaluation Results

The model was evaluated on a suite of Multiple Choice Question Answering (MCQA) benchmarks (on the validation and test sets of each benchmark, respectively); NLP4Education consists only of the roughly 1,000 questions and answers given to us.

Important note on the MCQA Evals benchmark:

The performance on these benchmarks is as follows:

Second evaluation, `[Letter]. [Answer]` format (type 0):

The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.

---
*[Insert Question Here]*
---
*[Insert Choices Here, e.g.:*
*A. Option 1*
*B. Option 2*
*C. Option 3*
*D. Option 4]*
---
Answer:

And the testing was done on the target `[Letter]. [Text answer]`.
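Likelihood-based MCQA scoring of this kind is commonly implemented as sketched below; this is an assumption about the harness, not the exact evaluation code used to produce these numbers. "Acc" picks the choice whose target has the highest total log-probability given the prompt, while "Acc Norm" uses a length-normalized score.

```python
# Sketch of likelihood-based MCQA scoring (an assumption about the harness).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "andresnowak/Qwen3-0.6B-MNLP_mcqa_model_text_2_1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

def sequence_logprob(prompt: str, target: str) -> tuple[float, float]:
    """Total and per-token log-probability of `target` given `prompt`."""
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    # Boundary tokenization effects are glossed over in this sketch.
    full_ids = tokenizer(prompt + target, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # position i predicts token i+1
    target_ids = full_ids[0, prompt_len:]
    token_lps = log_probs[prompt_len - 1:].gather(1, target_ids[:, None]).squeeze(1)
    total = token_lps.sum().item()
    return total, total / target_ids.numel()

# "Acc": argmax over choices of the total log-probability;
# "Acc Norm": argmax over choices of the length-normalized score.
```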

| Benchmark | Accuracy (Acc) | Normalized Accuracy (Acc Norm) |
|-----------|----------------|--------------------------------|
| ARC Challenge | 63.77% | 63.97% |
| ARC Easy | 81.77% | 80.86% |
| GPQA | 28.13% | 28.35% |
| Math QA | 29.27% | 29.18% |
| MCQA Evals | 41.56% | 40.26% |
| MMLU | 47.42% | 47.42% |
| MMLU Pro | 15.12% | 14.97% |
| MuSR | 44.84% | 43.39% |
| NLP4Education | 46.56% | 43.16% |
| Overall | 44.27% | 43.51% |

First evaluation, `[Letter]` format (type 0):

The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.

---
*[Insert Question Here]*
---
*[Insert Choices Here, e.g.:*
*A. Option 1*
*B. Option 2*
*C. Option 3*
*D. Option 4]*
---
Answer:

And the testing was done on the target `[Letter]`.

| Benchmark | Accuracy (Acc) | Normalized Accuracy (Acc Norm) |
|-----------|----------------|--------------------------------|
| ARC Challenge | 63.63% | 63.63% |
| ARC Easy | 81.84% | 81.84% |
| GPQA | 23.44% | 23.44% |
| Math QA | 29.12% | 29.12% |
| MCQA Evals | 41.56% | 41.56% |
| MMLU | 47.45% | 47.45% |
| MMLU Pro | 15.04% | 15.04% |
| MuSR | 45.11% | 45.11% |
| NLP4Education | 46.87% | 46.87% |
| Overall | 43.78% | 43.78% |


Framework versions

  • TRL: 0.18.1
  • Transformers: 4.52.4
  • Pytorch: 2.7.0
  • Datasets: 3.6.0
  • Tokenizers: 0.21.0

Citations

Cite TRL as:

@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}