Text Generation
Transformers
Safetensors
llama
datadreamer
datadreamer-0.28.0
Synthetic
text-generation-inference
Instructions for using dagger-realms/narrative_to_dagger with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use dagger-realms/narrative_to_dagger with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="dagger-realms/narrative_to_dagger")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("dagger-realms/narrative_to_dagger")
model = AutoModelForCausalLM.from_pretrained("dagger-realms/narrative_to_dagger")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use dagger-realms/narrative_to_dagger with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "dagger-realms/narrative_to_dagger"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "dagger-realms/narrative_to_dagger",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:

```shell
docker model run hf.co/dagger-realms/narrative_to_dagger
```
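The curl call above can also be made from Python. A minimal sketch that builds the same OpenAI-compatible completion request with only the standard library (the actual network call is left commented out, since it assumes the vLLM server from the previous step is already running on localhost:8000):

```python
import json
from urllib import request

# Same OpenAI-compatible completion request as the curl example above.
payload = {
    "model": "dagger-realms/narrative_to_dagger",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}
body = json.dumps(payload).encode("utf-8")

req = request.Request(
    "http://localhost:8000/v1/completions",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Requires a running vLLM server; uncomment to send the request:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```

The same request body works against the SGLang server below; only the port (30000 instead of 8000) changes.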
- SGLang
How to use dagger-realms/narrative_to_dagger with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "dagger-realms/narrative_to_dagger" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "dagger-realms/narrative_to_dagger",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "dagger-realms/narrative_to_dagger" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "dagger-realms/narrative_to_dagger",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use dagger-realms/narrative_to_dagger with Docker Model Runner:
```shell
docker model run hf.co/dagger-realms/narrative_to_dagger
```
DataDreamer training metadata for this model:

```json
{
  "model_card": {
    "Date & Time": "2024-04-07T01:56:02.043003",
    "Model Card": [
      "https://huggingface.co/LouisML/tinyllama_32k"
    ],
    "License Information": [
      "apache-2.0"
    ],
    "Citation Information": [
      "\n@inproceedings{Wolf_Transformers_State-of-the-Art_Natural_2020,\n author = {Wolf, Thomas and Debut, Lysandre and Sanh, Victor and Chaumond, Julien",
      "\n@Misc{peft,\n title = {PEFT: State-of-the-art Parameter-Efficient Fine-Tuning methods},\n author = {Sourab Mangrulkar and Sylvain Gugger and Lysandre Debut and Younes"
    ]
  },
  "data_card": {
    "Create Dataset of Locations": {
      "Date & Time": "2024-04-06T10:17:45.009636"
    },
    "zipped(Create Dataset of Locations, Generate Fiction for Each Location (select_columns))": {
      "Date & Time": "2024-04-06T15:17:35.786655"
    },
    "zipped(zipped(Create Dataset of Locations, Generate Fiction for Each Location (select_columns)), Clean Location Subset JSON)": {
      "Date & Time": "2024-04-06T15:17:39.288866"
    },
    "zipped(zipped(Create Dataset of Locations, Generate Fiction for Each Location (select_columns)), Clean Location Subset JSON) (train split)": {
      "Date & Time": "2024-04-06T15:17:39.562538"
    }
  },
  "__version__": "0.28.0",
  "datetime": "2024-04-06T23:37:47.115672",
  "type": "TrainHFFineTune",
  "name": "Train a Fictional Story => DAGGER JSON Model",
  "version": 1.0,
  "fingerprint": "91f4a2305e0abfa9",
  "req_versions": {
    "dill": "0.3.8",
    "sqlitedict": "2.1.0",
    "torch": "2.2.2",
    "numpy": "1.26.4",
    "transformers": "4.39.3",
    "datasets": "2.18.0",
    "huggingface_hub": "0.22.2",
    "accelerate": "0.28.0",
    "peft": "0.10.0",
    "tiktoken": "0.6.0",
    "tokenizers": "0.15.2",
    "openai": "1.16.2",
    "ctransformers": "0.2.27",
    "optimum": "1.18.0",
    "bitsandbytes": "0.43.0",
    "litellm": "1.31.14",
    "trl": "0.8.1",
    "setfit": "1.0.3"
  },
  "interpreter": "3.10.9 (main, Apr 17 2023, 21:32:03) [GCC 7.5.0]"
}
```
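The metadata above is machine-readable, so the pinned dependency versions recorded by DataDreamer can be extracted programmatically, for example to reproduce the training environment. A small sketch (the embedded JSON is an abridged copy of the card above):

```python
import json

# Abridged copy of the DataDreamer metadata shown above.
card = json.loads("""
{
  "__version__": "0.28.0",
  "type": "TrainHFFineTune",
  "name": "Train a Fictional Story => DAGGER JSON Model",
  "req_versions": {
    "torch": "2.2.2",
    "transformers": "4.39.3",
    "peft": "0.10.0",
    "trl": "0.8.1"
  }
}
""")

# Turn the recorded versions into pip-style pins.
pins = [f"{pkg}=={ver}" for pkg, ver in card["req_versions"].items()]
print(card["name"])
print(" ".join(pins))
```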