Instructions to use UsernameJustAnother/Nemo-12B-Marlin-v8 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use UsernameJustAnother/Nemo-12B-Marlin-v8 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="UsernameJustAnother/Nemo-12B-Marlin-v8")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("UsernameJustAnother/Nemo-12B-Marlin-v8")
model = AutoModelForCausalLM.from_pretrained("UsernameJustAnother/Nemo-12B-Marlin-v8")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use UsernameJustAnother/Nemo-12B-Marlin-v8 with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "UsernameJustAnother/Nemo-12B-Marlin-v8"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "UsernameJustAnother/Nemo-12B-Marlin-v8",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/UsernameJustAnother/Nemo-12B-Marlin-v8
```
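The OpenAI-compatible endpoint the curl snippet above talks to can also be called from Python with nothing but the standard library. This is a minimal sketch, assuming a vLLM server is already listening on `localhost:8000` as started above; the `build_chat_request` and `chat` helpers are illustrative names, not part of any library:

```python
# Minimal OpenAI-compatible chat client using only the standard library.
# Assumes the vLLM server from the snippet above is running on localhost:8000.
import json
import urllib.request


def build_chat_request(model: str, prompt: str) -> dict:
    """Build a /v1/chat/completions payload (hypothetical helper)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(base_url: str, payload: dict) -> dict:
    """POST the payload and return the decoded JSON response."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


payload = build_chat_request(
    "UsernameJustAnother/Nemo-12B-Marlin-v8",
    "What is the capital of France?",
)
# Uncomment with a running server:
# reply = chat("http://localhost:8000", payload)
# print(reply["choices"][0]["message"]["content"])
```

The official `openai` Python package works against the same endpoint (point its `base_url` at the server), but the raw-HTTP version keeps the request shape visible.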
- SGLang
How to use UsernameJustAnother/Nemo-12B-Marlin-v8 with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "UsernameJustAnother/Nemo-12B-Marlin-v8" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "UsernameJustAnother/Nemo-12B-Marlin-v8",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "UsernameJustAnother/Nemo-12B-Marlin-v8" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "UsernameJustAnother/Nemo-12B-Marlin-v8",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Unsloth Studio
How to use UsernameJustAnother/Nemo-12B-Marlin-v8 with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for UsernameJustAnother/Nemo-12B-Marlin-v8 to start chatting
```
Install Unsloth Studio (Windows)
```shell
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for UsernameJustAnother/Nemo-12B-Marlin-v8 to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for UsernameJustAnother/Nemo-12B-Marlin-v8 to start chatting
```
Load model with FastModel
```shell
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="UsernameJustAnother/Nemo-12B-Marlin-v8",
    max_seq_length=2048,
)
```

- Docker Model Runner
How to use UsernameJustAnother/Nemo-12B-Marlin-v8 with Docker Model Runner:
```shell
docker model run hf.co/UsernameJustAnother/Nemo-12B-Marlin-v8
```
Marlin v8: The Big Kahuna Update
Uploaded model
- Developed by: UsernameJustAnother
- License: apache-2.0
- Finetuned from model: unsloth/Mistral-Nemo-Base-2407
Standard disclaimer: This is me teaching myself the basics of fine-tuning, with notes extensively borrowed from MN-12B-Celeste-V1.9. Huge props to nothingisreal for posting their process and making me think this was even possible for a little fish like me.
The aim here is for a solid RP/storywriting model that will fit in 16GB of VRAM with a decent amount of context (> 16K).
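A back-of-the-envelope check on that 16GB / 16K-context goal. These figures are assumptions, not measurements from this checkpoint: a 4-bit quantized deployment (~0.5 bytes per weight) and Mistral-Nemo's published shape (40 layers, 8 KV heads, head dim 128, fp16 KV cache):

```python
# Rough VRAM estimate for a 4-bit quant of a 12B model at 16K context.
# Model-shape numbers are assumptions based on Mistral-Nemo's config.
params = 12_000_000_000
weight_bytes = params // 2                  # ~0.5 bytes/param at 4-bit
print(weight_bytes / 2**30)                 # ~5.6 GiB of weights

layers, kv_heads, head_dim = 40, 8, 128
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * 2  # K+V, fp16
print(kv_bytes_per_token)                   # 163840 bytes (~160 KiB/token)

context = 16_384
kv_cache = context * kv_bytes_per_token
print(kv_cache / 2**30)                     # 2.5 GiB of KV cache

total_gib = (weight_bytes + kv_cache) / 2**30
print(round(total_gib, 1))                  # ~8.1 GiB, fits in 16 GB with headroom
```

Quantization overhead, activations, and any batching eat into the remainder, but the headline numbers leave comfortable room under 16 GB.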
New for v8:
- Fine-tuned on Nemo Base instead of Instruct, because why not?
- BIG KAHUNA POWERS: ACTIVATE! 10K-ish records of mostly-human convos and stories, trained in ChatML, up from 8K in v6. For all of these records I did additional filtering/editing/selection beyond what I think happened in Celeste v1.9, mostly to teach myself some dataset skillz, plus I added more stories. Specifically:
- 4K records from Reddit Writing Prompts (equal split of highest-rated SFW & NSFW)
- 2K of Claude instruct, lightly curated & de-clauded
- 2K of curated Falling through the Skies
- 2K of curated/lightly de-ministrated C2 chat
- Trained on a single 80GB A100 from runpod.io, with a batch size of 8 (up from 2 on a 40GB A100), so far fewer steps were involved. The run took about 7.5 hrs.
- And remember kids, water is wet and fish are moist.
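The "far fewer steps" claim above is just arithmetic on the hyperparameters quoted in this card (per-device batch 8, gradient accumulation 4, ~10K records, 2 epochs). A quick illustrative sketch, not numbers from the actual run logs:

```python
# Effective batch size and rough optimizer-step counts for this run.
# All inputs are the hyperparameters quoted elsewhere in this card;
# step counts are illustrative, not taken from training logs.
per_device_train_batch_size = 8
gradient_accumulation_steps = 4

effective_batch = per_device_train_batch_size * gradient_accumulation_steps
print(effective_batch)   # 32 sequences per optimizer step

records, epochs = 10_000, 2
steps = (records * epochs) // effective_batch
print(steps)             # ~625 optimizer steps for the full run

# At the old per-device batch of 2 (same accumulation), 4x as many:
old_steps = (records * epochs) // (2 * gradient_accumulation_steps)
print(old_steps)         # ~2500
```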
I pulled v7 because I honestly don't think it's as good as v6, and don't want folks to get the wrong idea that it's better just because the version number is higher. Besides, nothing good ever fires on all seven cylinders.
Props again to Daniel and Unsloth for writing magic that lets me train this on a single A100 with variable (wildly variable) context length. The docker image I used to run Unsloth on runpod is here.
Here's what the train/eval loss looked like:
I still don't know what makes training loss drop at the end of epoch 1, or why eval loss doesn't drop down to match (it continues to decrease, but slowly). I did say this was experimental, right? If I want to throw more money at this I might try a 3 epoch run just to see what happens.
It was trained with the following settings:

```python
model = FastLanguageModel.get_peft_model(
    model,
    r = 256,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",],
    lora_alpha = 128,   # 128 / sqrt(256) gives a scaling factor of 8
    lora_dropout = 0.1, # Supports any, but = 0 is optimized
    bias = "none",      # Supports any, but = "none" is optimized
    # [NEW] "unsloth" uses 30% less VRAM, fits 2x larger batch sizes!
    use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context
    random_state = 3407,
    use_rslora = True,  # sets the adapter scaling factor to lora_alpha/sqrt(r) instead of lora_alpha/r
    loftq_config = None, # And LoftQ
)

lr_scheduler_kwargs = {
    'min_lr': 0.0000024 # Adjust this value as needed
}

per_device_train_batch_size = 8,
per_device_eval_batch_size = 8,
gradient_accumulation_steps = 4,
eval_accumulation_steps = 4,
prediction_loss_only = True, # When performing evaluation and generating predictions, only return the loss
warmup_steps = 50,
num_train_epochs = 2, # For longer training runs! 12 hrs/epoch?
learning_rate = 5e-5,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
fp16_full_eval = True, # stops eval from trying to use fp32
eval_strategy = "steps", # 'no', 'steps', 'epoch'. Don't use this without an eval dataset
eval_steps = 50, # if eval_strategy is set to 'steps', evaluate every N steps
logging_steps = 5, # so eval and logging happen on the same schedule
optim = "adamw_8bit",
weight_decay = 0,
lr_scheduler_type = "cosine_with_min_lr", # linear, cosine, cosine_with_min_lr; default linear
lr_scheduler_kwargs = lr_scheduler_kwargs, # needed for cosine_with_min_lr
seed = 3407,
```
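The `lora_alpha` comment in the settings above is a one-liner to verify: rank-stabilized LoRA (`use_rslora = True`) scales the adapter by `lora_alpha / sqrt(r)` instead of classic LoRA's `lora_alpha / r`, which is what keeps the scaling factor sane at r = 256:

```python
# Adapter scaling factor for the r/alpha values used in this run.
import math

r, lora_alpha = 256, 128

classic = lora_alpha / r            # classic LoRA scaling
print(classic)                      # 0.5 -- would badly damp a rank-256 adapter

rslora = lora_alpha / math.sqrt(r)  # rank-stabilized LoRA scaling
print(rslora)                       # 8.0 -- matches the comment in the config
```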
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.