goomafia committed
Commit 12a9f77 · verified · 1 parent(s): 7fdff00

Upload 2 files

Files changed (2)
  1. README.md +46 -12
  2. app.py +51 -0
README.md CHANGED
@@ -1,12 +1,46 @@
- ---
- title: Uncensored Deepseek Qwen14b
- emoji: 🚀
- colorFrom: pink
- colorTo: pink
- sdk: gradio
- sdk_version: 5.49.1
- app_file: app.py
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ ---
+ title: Uncensored DeepSeek Qwen 14B
+ emoji: 🧠
+ colorFrom: indigo
+ colorTo: purple
+ sdk: gradio
+ python_version: 3.10
+ sdk_version: 4.43.0
+ suggested_hardware: a100-large
+ suggested_storage: large
+ app_file: app.py
+ app_port: 7860
+ fullWidth: true
+ header: mini
+ short_description: Large uncensored model powered by a Qwen-14B distill
+ tags:
+ - text-generation
+ - qwen
+ - uncensored
+ - deepseek
+ models:
+ - uncensoredai/UncensoredLM-DeepSeek-R1-Distill-Qwen-14B
+ preload_from_hub:
+ - uncensoredai/UncensoredLM-DeepSeek-R1-Distill-Qwen-14B
+ ---
+ # 🧠 Uncensored DeepSeek Qwen 14B
+
+ This Space runs the **UncensoredLM-DeepSeek-R1-Distill-Qwen-14B** model behind a Gradio chat interface.
+ It supports long-context chat, system prompts, and instruction-style prompting.
+
+ ### 🔧 Features
+ - Full 14B-parameter model (DeepSeek-R1 distilled into Qwen 14B)
+ - Mixed-precision (float16) inference for performance
+ - Context length up to 131K tokens
+ - Thai and English bilingual support
+ - Gradio 4.43 interface
+
+ ### 💡 Usage
+ 1. Type your message in the chat box.
+ 2. Click "Generate".
+ 3. Watch the response stream in as it is generated.
+
+ ---
+
+ 🛠️ Built with ❤️ by **โน๊ต**
+ Running on Hugging Face Spaces (Gradio SDK)
app.py ADDED
@@ -0,0 +1,51 @@
+ import gradio as gr
+ from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
+ import torch
+ from threading import Thread
+
+ MODEL_ID = "uncensoredai/UncensoredLM-DeepSeek-R1-Distill-Qwen-14B"
+
+ # Load the model (half precision, sharded automatically across available devices)
+ tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
+ model = AutoModelForCausalLM.from_pretrained(
+     MODEL_ID,
+     torch_dtype=torch.float16,
+     device_map="auto",
+ )
+
+ def generate_text(prompt, temperature=0.8, top_p=0.9, max_new_tokens=512):
+     inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+     # Stream tokens as they are produced instead of waiting for the full output
+     streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
+     generation_kwargs = dict(
+         **inputs,
+         streamer=streamer,
+         temperature=temperature,
+         top_p=top_p,
+         do_sample=True,
+         max_new_tokens=max_new_tokens,
+     )
+     # Run generation in a background thread so the streamer can be consumed here
+     thread = Thread(target=model.generate, kwargs=generation_kwargs)
+     thread.start()
+     output = ""
+     for new_text in streamer:
+         output += new_text
+         yield output
+     thread.join()
+
+ with gr.Blocks(title="Uncensored DeepSeek Qwen 14B") as demo:
+     gr.Markdown("## 🧠 Uncensored DeepSeek Qwen 14B")
+     gr.Markdown("Thai & English chatbot – powered by a Qwen 14B distilled model")
+
+     with gr.Row():
+         with gr.Column(scale=3):
+             prompt = gr.Textbox(label="Input", placeholder="Type your message here...", lines=3)
+             temperature = gr.Slider(0.1, 1.5, value=0.8, step=0.1, label="Temperature")
+             top_p = gr.Slider(0.1, 1.0, value=0.9, step=0.05, label="Top P")
+             max_new_tokens = gr.Slider(64, 2048, value=512, step=64, label="Max New Tokens")
+             btn = gr.Button("Generate")
+
+         with gr.Column(scale=5):
+             output = gr.Textbox(label="AI Response", lines=20)
+
+     btn.click(generate_text, inputs=[prompt, temperature, top_p, max_new_tokens], outputs=output)
+
+ demo.queue().launch()
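The `temperature` and `top_p` sliders above are forwarded to `model.generate`, where they control sampling. A dependency-free sketch of what those two knobs do to a token distribution (the four-token vocabulary and logit values below are purely illustrative):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; lower temperature sharpens the distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, top_p=0.9):
    """Nucleus filtering: keep the smallest set of tokens whose cumulative
    probability reaches top_p, zero out the rest, then renormalize."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = set(), 0.0
    for i in order:
        kept.add(i)
        cum += probs[i]
        if cum >= top_p:
            break
    mass = sum(probs[i] for i in kept)
    return [probs[i] / mass if i in kept else 0.0 for i in range(len(probs))]

# Illustrative 4-token vocabulary with the app's default settings
probs = softmax([2.0, 1.0, 0.5, -1.0], temperature=0.8)
filtered = top_p_filter(probs, top_p=0.9)
print(filtered)  # the low-probability tail token is zeroed out
```

Raising the temperature slider flattens `probs` (more diverse output); lowering `top_p` shrinks the kept set (more conservative output).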