Merge Experiments
Collection
Sorted from oldest (top) to newest (bottom) • 115 items
How to use Naphula/Warlock-7B-v2 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="Naphula/Warlock-7B-v2")
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe(messages)
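
The pipeline forwards generation keyword arguments straight to model.generate, so sampling behavior can be tuned in the same call. A minimal sketch (the sampling values are illustrative, not tuned for this model; with recent Transformers versions the assistant reply is appended to the returned conversation):
out = pipe(
    messages,
    max_new_tokens=128,  # cap the reply length
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,     # illustrative value
    top_p=0.9,           # illustrative value
)
print(out[0]["generated_text"][-1]["content"])  # last message is the model's reply
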
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Naphula/Warlock-7B-v2")
model = AutoModelForCausalLM.from_pretrained("Naphula/Warlock-7B-v2")
messages = [
{"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
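
To stream tokens to stdout as they are generated instead of decoding at the end, Transformers ships a TextStreamer that plugs into the same generate call. A minimal sketch reusing the tokenizer, model, and inputs from above:
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True)  # skip_prompt avoids echoing the input
model.generate(**inputs, streamer=streamer, max_new_tokens=40)
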
How to use Naphula/Warlock-7B-v2 with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "Naphula/Warlock-7B-v2"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "Naphula/Warlock-7B-v2",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
  }'
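
Since the vLLM server exposes an OpenAI-compatible API, the official openai Python client can be pointed at it as well. A minimal sketch (the api_key is a placeholder; vLLM ignores it unless you started the server with one):
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="Naphula/Warlock-7B-v2",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
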
How to use Naphula/Warlock-7B-v2 with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Naphula/Warlock-7B-v2" \
  --host 0.0.0.0 \
  --port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "Naphula/Warlock-7B-v2",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
  }'

# Alternatively, run the SGLang server in Docker:
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Naphula/Warlock-7B-v2" \
    --host 0.0.0.0 \
    --port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "Naphula/Warlock-7B-v2",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
  }'
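
The same OpenAI-compatible endpoint can be called from Python with the requests package instead of curl. A minimal sketch against the SGLang server above:
import requests

resp = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "Naphula/Warlock-7B-v2",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
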
How to use Naphula/Warlock-7B-v2 with Docker Model Runner:
docker model run hf.co/Naphula/Warlock-7B-v2
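
Docker Model Runner also exposes an OpenAI-compatible endpoint. A sketch, assuming host TCP access is enabled on the default port 12434 (the port and path depend on your Docker Desktop settings and version):
curl -X POST "http://localhost:12434/engines/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "hf.co/Naphula/Warlock-7B-v2",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
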
Grimoires:
- v0a - DARE_TIES of 4 models (A+B+C+E)
- v0b - DARE_TIES of 4 models (A+C+D+E)
- v0c - SLERP of 2 models (C+E)
- v0d - DARE_TIES of 5 models, balanced evenly
- v0e - DARE_TIES of 5 models, 3 heavy 2 light
- v0f - DARE_TIES of 4 models (A+B+D+E)
- v0g - DARE_TIES of 3 models (A+B+E)
- v0h - SLERP of 2 models (B+E)
- v0i - SLERP of 2 models (A+B)
- v0j - DARE_TIES of 5 models, balanced unevenly, B heavy
- v0k - DARE_TIES of 5 models, balanced unevenly, A heavy
- v2a - KARCHER of the same 5 models as v0k, balanced evenly

The v2a config:
architecture: MistralForCausalLM
merge_method: karcher
dtype: bfloat16
models:
  - model: A:\LLM\.cache\huggingface\hub\!models--dphn--dolphin-2.8-mistral-7b-v02\fixed
  - model: A:\LLM\.cache\huggingface\hub\!models--fearlessdots--WizardLM-2-7B-abliterated\fixed
  - model: A:\LLM\.cache\huggingface\hub\!models--KoboldAI--Mistral-7B-Erebus-v3
  - model: A:\LLM\.cache\huggingface\hub\!models--LeroyDyer--SpydazWeb_AI_HumanAI_RP
  - model: A:\LLM\.cache\huggingface\hub\!models--maywell--PiVoT-0.1-Evil-a\fixed
parameters:
tokenizer:
  source: union
chat_template: auto

One of the DARE_TIES configs from the series (dolphin-heavy, with WizardLM-2 as the base):

base_model: fearlessdots/WizardLM-2-7B-abliterated
merge_method: dare_ties
architecture: MistralForCausalLM
dtype: bfloat16
models:
  - model: dphn/dolphin-2.8-mistral-7b-v02
    parameters:
      density: 0.55
      weight: 0.55
  - model: fearlessdots/WizardLM-2-7B-abliterated
    parameters:
      density: 0.4
      weight: 0.2
  - model: KoboldAI/Mistral-7B-Erebus-v3
    parameters:
      density: 0.2
      weight: 0.05
  - model: LeroyDyer/SpydazWeb_AI_HumanAI_RP
    parameters:
      density: 0.3
      weight: 0.1
  - model: maywell/PiVoT-0.1-Evil-a
    parameters:
      density: 0.3
      weight: 0.1
tokenizer:
  source: union
chat_template: auto
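
These recipes are mergekit YAML configs, so any of them can be reproduced by saving the block to a file and running mergekit over it. A sketch, assuming the config above is saved as grimoire.yml (a hypothetical filename) and a CUDA GPU is available:
pip install mergekit
mergekit-yaml grimoire.yml ./Warlock-7B-v2 --cuda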