How to use T145/KRONOS-8B-V1-P3 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="T145/KRONOS-8B-V1-P3")
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe(messages)
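The pipeline call forwards generation arguments to the underlying model, so sampling settings can be passed per call. A small sketch with illustrative values:

# Pass generation arguments through the pipeline call
pipe(messages, max_new_tokens=64, do_sample=True, temperature=0.7)

# Load model directly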
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("T145/KRONOS-8B-V1-P3")
model = AutoModelForCausalLM.from_pretrained("T145/KRONOS-8B-V1-P3")
messages = [
{"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
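At float32 precision an 8B-parameter model needs roughly 32 GB of memory, so it is common to load it in bfloat16 with automatic device placement instead. A minimal sketch using standard Transformers loading options (device_map="auto" requires the accelerate package):

# Load in bfloat16 and let Accelerate place layers across available devices
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("T145/KRONOS-8B-V1-P3")
model = AutoModelForCausalLM.from_pretrained(
    "T145/KRONOS-8B-V1-P3",
    torch_dtype=torch.bfloat16,  # roughly halves memory vs. float32
    device_map="auto",           # requires the accelerate package
)

How to use T145/KRONOS-8B-V1-P3 with vLLM: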
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "T145/KRONOS-8B-V1-P3"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "T145/KRONOS-8B-V1-P3",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
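Because the server exposes an OpenAI-compatible API, you can also query it from Python with the openai client instead of curl. A minimal sketch; the base URL and port match the serve command above, and the api_key value is a placeholder (vLLM ignores it unless a key is configured):

# Query the vLLM server with the OpenAI Python client
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="T145/KRONOS-8B-V1-P3",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)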
How to use T145/KRONOS-8B-V1-P3 with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "T145/KRONOS-8B-V1-P3" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "T145/KRONOS-8B-V1-P3",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
# Alternatively, run the SGLang server in Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "T145/KRONOS-8B-V1-P3" \
--host 0.0.0.0 \
--port 30000
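The SGLang server speaks the same OpenAI-compatible API on port 30000, so the openai Python client works here as well. The sketch below streams tokens as they are generated; the base URL matches the launch commands above and the api_key value is a placeholder:

# Stream a chat completion from the SGLang server
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
stream = client.chat.completions.create(
    model="T145/KRONOS-8B-V1-P3",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()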
How to use T145/KRONOS-8B-V1-P3 with Docker Model Runner:
docker model run hf.co/T145/KRONOS-8B-V1-P3
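docker model run opens an interactive chat in the terminal. Docker Model Runner also exposes an OpenAI-compatible endpoint; the sketch below assumes host TCP access is enabled on the documented default port 12434, and both that port and the /engines/v1 path prefix are assumptions to verify against your local setup:

# Call Docker Model Runner's OpenAI-compatible endpoint (assumed port and path)
from openai import OpenAI

client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="docker")
response = client.chat.completions.create(
    model="hf.co/T145/KRONOS-8B-V1-P3",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)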
This is a merge of Meta Llama 3.1 Instruct and REILX's "750Mb" LoRA, created using llm-tools.
The primary purpose of this model is to be merged into other models in the same family using the TIES merge method.
Creating quants for this is entirely unnecessary.
The following Bash command was used to produce this model:
python /llm-tools/merge-lora.py -m unsloth/Meta-Llama-3.1-8B-Instruct -l REILX/Llama-3-8B-Instruct-750Mb-lora
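For context, TIES merging resolves interference between fine-tunes in three steps: trim each task vector (the delta from the base model) to its largest-magnitude entries, elect a majority sign per parameter, and average only the entries that agree with the elected sign. The toy sketch below illustrates the idea on plain tensors; it is a simplification for intuition, not the implementation used by real merge tooling:

# Toy illustration of TIES (trim, elect sign, disjoint merge) on raw tensors
import torch

def ties_merge(base, finetuned, density=0.5):
    # Task vectors: what each fine-tune changed relative to the base
    task_vectors = [ft - base for ft in finetuned]
    # 1. Trim: keep only the top-`density` fraction of entries by magnitude
    trimmed = []
    for tv in task_vectors:
        k = max(1, int(density * tv.numel()))
        cutoff = tv.abs().flatten().topk(k).values.min()
        trimmed.append(torch.where(tv.abs() >= cutoff, tv, torch.zeros_like(tv)))
    stacked = torch.stack(trimmed)
    # 2. Elect: majority sign per parameter, weighted by magnitude
    elected = torch.sign(stacked.sum(dim=0))
    # 3. Merge: average only the entries whose sign agrees with the election
    agrees = (torch.sign(stacked) == elected) & (stacked != 0)
    merged = (stacked * agrees).sum(dim=0) / agrees.sum(dim=0).clamp(min=1)
    return base + merged

base = torch.zeros(4)
ft_a = torch.tensor([0.9, -0.2, 0.4, 0.0])
ft_b = torch.tensor([0.7, 0.3, -0.5, 0.1])
print(ties_merge(base, [ft_a, ft_b]))  # tensor([ 0.8000, 0.0000, -0.5000, 0.0000])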
These are the model's Open LLM Leaderboard evaluation results. Detailed results can be found here! Summarized results can be found here!
| Metric | Value (%) |
|---|---|
| Average | 25.67 |
| IFEval (0-shot) | 71.37 |
| BBH (3-shot) | 30.27 |
| MATH Lvl 5 (4-shot) | 18.35 |
| GPQA (0-shot) | 1.34 |
| MuSR (0-shot) | 5.96 |
| MMLU-PRO (5-shot) | 26.72 |