
Carnice-27b

Carnice-27b is the merged full-model release of the Trinity Hermes-Agent training run on top of Qwen/Qwen3.5-27B.

This repo contains the merged Stage C weights, not just the adapter. The adapter was trained in three stages and then merged back into the base model so it can load as a standalone checkpoint.
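Because the Stage C adapter is already merged into the base weights, the checkpoint loads like any standalone causal LM with no PEFT/adapter step. A minimal sketch using Hugging Face `transformers` (repo id and BF16 dtype are taken from this card; hardware flags like `device_map` will vary with your setup):

```python
# Repo id from this model card.
MODEL_ID = "kai-os/Carnice-27b"

def load_carnice(device_map: str = "auto"):
    """Load the merged checkpoint as a standalone model.

    No adapter loading is needed: the Stage C adapter has already been
    merged back into the base model's weights.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # weights are published in BF16
        device_map=device_map,
    )
    return tokenizer, model
```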

Acknowledgements

This work would not have been possible without Zachary Mueller, Lambda, Teknium, and Nous Research.

Trained using reasoning traces from lambda/hermes-agent-reasoning-traces.

Trinity Process

Stage A: Premium Reasoning Backbone

  • 3300 train rows
  • 193 validation rows
  • 12288 max sequence length
  • final eval loss 0.5316
  • final eval perplexity 1.7016

Stage B: Hermes Alignment

  • widened Carnice + DJ + Lambda alignment mix
  • 2269 train rows
  • 80 validation rows
  • final eval loss 0.2336
  • final eval perplexity 1.2632

Stage C: Carnice Polish

  • 600 train rows
  • 60 validation rows
  • final eval loss 0.2310
  • final eval perplexity 1.2599
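The reported perplexities are simply exp(eval loss), so the stage summaries above are internally consistent. A quick check (loss values copied from this card):

```python
import math

# Final eval losses reported for the three Trinity stages.
stage_losses = {"A": 0.5316, "B": 0.2336, "C": 0.2310}

# Perplexity is exp(mean cross-entropy loss).
perplexities = {k: math.exp(v) for k, v in stage_losses.items()}
# Matches the reported 1.7016 / 1.2632 / 1.2599 to about 3 decimal places.
```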

Intended Use

Carnice-27b is tuned for Hermes-Agent-style workflows: terminal, file, browser, repository, debugging, and multi-step tool use.
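The card does not pin down an exact prompt format, so here is a hedged sketch of the turn structure for a multi-step tool workflow, expressed as an OpenAI-style messages list. The tool name, command, and output below are illustrative only; the real format comes from the model's bundled chat template.

```python
import json

# Illustrative Hermes-Agent-style multi-step tool workflow.
# The "terminal" tool and its arguments are hypothetical examples.
messages = [
    {"role": "system",
     "content": "You are an agent with terminal and file tools."},
    {"role": "user",
     "content": "Find TODO comments in src/ and summarize them."},
    # Step 1: the model requests a tool call (assistant turn).
    {"role": "assistant",
     "content": json.dumps({"tool": "terminal",
                            "args": {"cmd": "grep -rn TODO src/"}})},
    # Step 2: the runtime returns the tool output (tool turn).
    {"role": "tool",
     "content": "src/main.py:12: # TODO: handle timeouts"},
    # Step 3: the model would then produce a final answer from the
    # observation in the next assistant turn.
]
```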

Benchmark Status

Reproducible benchmark runs are not attached yet; they will be added once the dedicated benchmark box run is complete.

Model size: 27B parameters (BF16, Safetensors)
