Instructions for using marcuscedricridia/Konjac-0.6B with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use marcuscedricridia/Konjac-0.6B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="marcuscedricridia/Konjac-0.6B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("marcuscedricridia/Konjac-0.6B")
model = AutoModelForCausalLM.from_pretrained("marcuscedricridia/Konjac-0.6B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use marcuscedricridia/Konjac-0.6B with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "marcuscedricridia/Konjac-0.6B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "marcuscedricridia/Konjac-0.6B",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

Use Docker
```shell
docker model run hf.co/marcuscedricridia/Konjac-0.6B
```
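Because the server exposes an OpenAI-compatible API, it can also be called from Python with just the standard library. This is a sketch, not part of vLLM itself: it assumes the server started above is reachable at `localhost:8000`, and `build_payload`/`chat_completion` are illustrative helper names.

```python
import json
from urllib import request

BASE_URL = "http://localhost:8000/v1"  # assumes the vLLM server above is running
MODEL = "marcuscedricridia/Konjac-0.6B"


def build_payload(prompt: str, model: str = MODEL) -> dict:
    """Build an OpenAI-compatible chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def chat_completion(prompt: str, base_url: str = BASE_URL) -> str:
    """POST the request and return the assistant's reply text."""
    req = request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.loads(resp.read().decode("utf-8"))
    return body["choices"][0]["message"]["content"]


# Example (requires a running server):
# print(chat_completion("What is the capital of France?"))
```

The same request shape works against the SGLang server below; only the port changes.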
- SGLang
How to use marcuscedricridia/Konjac-0.6B with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "marcuscedricridia/Konjac-0.6B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "marcuscedricridia/Konjac-0.6B",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

Use Docker images
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "marcuscedricridia/Konjac-0.6B" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "marcuscedricridia/Konjac-0.6B",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

- Unsloth Studio
How to use marcuscedricridia/Konjac-0.6B with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for marcuscedricridia/Konjac-0.6B to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for marcuscedricridia/Konjac-0.6B to start chatting
```
Use Hugging Face Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for marcuscedricridia/Konjac-0.6B to start chatting
```
Load model with FastModel
```shell
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="marcuscedricridia/Konjac-0.6B",
    max_seq_length=2048,
)
```

- Docker Model Runner
How to use marcuscedricridia/Konjac-0.6B with Docker Model Runner:
```shell
docker model run hf.co/marcuscedricridia/Konjac-0.6B
```
Konjac-0.6B-exp Model Description
Overview
Konjac-0.6B-exp is an experimental creative-writing model designed for uncensored roleplaying and narrative generation. It produces short stories with a high degree of creative freedom and fluidity, and is tuned to generate engaging, imaginative content across a range of genres, featuring diverse characters and scenarios. The name "Konjac" reflects the goal of being small yet effective for creative applications.
This model is not designed for reasoning or structured logic and does not perform explicit inference steps. Instead, it generates output based purely on patterns in its training data, focusing on creativity and narrative development.
Note: The model's uncensored output can sometimes be inconsistent, depending on the prompt, as it is still being refined to handle such cases effectively. Expect to see updates in future iterations.
Intended Use
- Creative Writing: Ideal for generating short-form stories, dialogues, and roleplay scenarios.
- Roleplay: Designed to facilitate interactive fiction or creative text-based roleplay experiences.
- Uncensored Content: It allows for the generation of uncensored content, but this may vary depending on the prompt used.
Key Features
- Size: 0.6 billion parameters, offering a balance between performance and size, making it suitable for devices like phones.
- Uncensored: Allows freedom in output generation, though it may be inconsistent at times.
- Roleplay Focused: Built with a focus on generating creative and dynamic storytelling for roleplay and creative writing.
- Short Stories: Primarily focused on generating short stories that are coherent, engaging, and sometimes experimental.
Model Limitations
- No Reasoning Capabilities: This model was fine-tuned to avoid reasoning, which limits its ability to generate logical conclusions or long, structured outputs. This may change in future versions.
- Uncensored Output: The model's ability to generate uncensored text is currently imperfect, and certain prompts may not result in uncensored outputs.
- Limited Contextual Understanding: Since the model was trained on responses only (without user or system prompts), it might behave differently depending on the provided input.
Recommendations for Usage
Here is an example of how to use this model with the transformers library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
import threading

model_name = "marcuscedricridia/Konjac-0.6B"

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)

# Prepare input
prompt = """
Please write a story using the following writing prompt: Demons have to do at least one evil thing every day to survive. This one comes to your bakery everyday to buy bread for the homeless kids and steal exactly one cookie.
The title of this story should be: The Baker's Demon
It should feature the following genres: Fantasy, Drama
"""
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Streamer yields decoded tokens as they are generated
streamer = TextIteratorStreamer(tokenizer, skip_special_tokens=True)

# Generation parameters
generation_kwargs = dict(
    **inputs,
    streamer=streamer,
    max_new_tokens=8000,
    temperature=0.8,         # controls randomness (higher = more random)
    top_k=50,                # limits sampling to the top-k tokens
    top_p=0.95,              # nucleus sampling: smallest set with cumulative prob >= p
    repetition_penalty=1.1,  # penalizes repeated tokens
    do_sample=True,          # required for the sampling settings to take effect
)

# Run generation in a background thread so the main thread can stream
thread = threading.Thread(target=model.generate, kwargs=generation_kwargs)
thread.start()

# Read streamed output
print("Streaming output:")
for token in streamer:
    print(token, end="", flush=True)
thread.join()  # wait for generation to finish
```
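The sampling knobs in `generation_kwargs` can be illustrated with a standalone sketch of how temperature scaling, top-k filtering, and top-p (nucleus) filtering interact. This is a toy re-implementation for intuition only, not Transformers' actual sampler, and the logits below are made-up numbers:

```python
import math
import random


def sample_next_token(logits, temperature=0.8, top_k=50, top_p=0.95, rng=random):
    """Toy sampler: temperature -> softmax -> top-k -> top-p -> draw one token."""
    # Temperature: divide logits before softmax; lower values sharpen the distribution.
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    max_l = max(scaled.values())
    exps = {tok: math.exp(l - max_l) for tok, l in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}

    # Top-k: keep only the k most probable tokens.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

    # Top-p: keep the smallest prefix whose cumulative probability reaches p.
    kept, cum = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        cum += p
        if cum >= top_p:
            break

    # Renormalize the survivors and draw one token.
    z = sum(p for _, p in kept)
    r, acc = rng.random() * z, 0.0
    for tok, p in kept:
        acc += p
        if acc >= r:
            return tok
    return kept[-1][0]


# Toy next-token distribution: the dominant tokens survive, the tail is cut off.
logits = {"cookie": 4.0, "bread": 2.5, "demon": 1.0, "the": 0.5, "zzz": -3.0}
print(sample_next_token(logits))
```

With these settings the low-probability tail (`"zzz"`) is effectively never sampled, which is why raising `top_p` or `temperature` makes output more varied but also less predictable.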
Future Developments
- Model Enhancements: Future versions of the model will aim to fix the issues around inconsistent uncensored output and potentially reintroduce reasoning capabilities.
- Larger Outputs: We plan to refine the model to generate longer and more complex narratives, similar to the styles of well-known models like GLM, Gemma, O3, and O4, with improved formatting and creative titles.
- Exploration of Parameters: New training will focus on increasing the creative and thematic variety while maintaining short-form coherence.
Known Issues
- Inconsistent Uncensored Output: The uncensored functionality is still being refined. Sometimes, the model may refuse to generate uncensored content depending on the prompt.
- Size Limitation: The current version will likely remain the smallest in the Konjac family, with future models focusing on improving variations, iterations, and fixes.