Commit b931367 (0 parents), committed by GitHub Action

Sync ling-space changes (filtered) from commit 127300e

.gitignore ADDED
@@ -0,0 +1,122 @@
1
+ # Byte-compiled / optimized / DLL files
2
+ __pycache__/
3
+ *.py[cod]
4
+ *$py.class
5
+
6
+ # C extensions
7
+ *.so
8
+
9
+ # Distribution / packaging
10
+ .Python
11
+ build/
12
+ develop-eggs/
13
+ dist/
14
+ downloads/
15
+ eggs/
16
+ .eggs/
17
+ lib/
18
+ lib64/
19
+ parts/
20
+ sdist/
21
+ var/
22
+ wheels/
23
+ pip-wheel-metadata/
24
+ share/python-wheels/
25
+ *.egg-info/
26
+ .installed.cfg
27
+ *.egg
28
+ MANIFEST
29
+
30
+ # PyInstaller
31
+ # Usually these files are written by a python script from a template
32
+ # before PyInstaller builds the exe, so as to inject date/other infos into it.
33
+ *.manifest
34
+ *.spec
35
+
36
+ # Installer logs
37
+ pip-log.txt
38
+ pip-delete-this-directory.txt
39
+
40
+ # Unit test / coverage reports
41
+ htmlcov/
42
+ .tox/
43
+ .nox/
44
+ .coverage
45
+ .coverage.*
46
+ .cache
47
+ nosetests.xml
48
+ coverage.xml
49
+ *.cover
50
+ *.py,cover
51
+ .hypothesis/
52
+ .pytest_cache/
53
+
54
+ # Translations
55
+ *.mo
56
+ *.pot
57
+
58
+ # Django stuff:
59
+ *.log
60
+ local_settings.py
61
+ db.sqlite3
62
+ db.sqlite3-journal
63
+
64
+ # Flask stuff:
65
+ instance/
66
+ .webassets-cache
67
+
68
+ # Scrapy stuff:
69
+ .scrapy
70
+
71
+ # Sphinx documentation
72
+ docs/_build/
73
+
74
+ # PyBuilder
75
+ target/
76
+
77
+ # Jupyter Notebook
78
+ .ipynb_checkpoints
79
+
80
+ # IPython
81
+ profile_default/
82
+ ipython_config.py
83
+
84
+ # pyenv
85
+ .python-version
86
+
87
+ # celery beat schedule file
88
+ celerybeat-schedule
89
+
90
+ # SageMath parsed files
91
+ *.sage.py
92
+
93
+ # Environments
94
+ .env
95
+ .venv
96
+ env/
97
+ venv/
98
+ ENV/
99
+ env.bak/
100
+ venv.bak/
101
+
102
+ # Spyder project settings
103
+ .spyderproject
104
+ .spyderworkspace
105
+
106
+ # Rope project settings
107
+ .ropeproject
108
+
109
+ # mkdocs documentation
110
+ /site
111
+
112
+ # mypy
113
+ .mypy_cache/
114
+ .dmypy.json
115
+ dmypy.json
116
+
117
+ # Pyre type checker
118
+ .pyre/
119
+
120
+ # Personal
121
+ .secrets
122
+ secrets.cfg
AGENTS.md ADDED
@@ -0,0 +1,23 @@
+ # Project Charter: ling-space
+
+ ## 1. Project Overview
+
+ - **Goal**: Serve as the comprehensive showcase space for the Ling model family, integrating text generation, image generation, and other multimodal capabilities.
+ - **HF Space link**: (fill in after deployment)
+
+ ## 2. Technology Choices
+
+ - **SDK**: Gradio
+ - **Core models**: Ling model family (TBD)
+ - **Core Python libraries**: gradio, transformers, torch
+
+ ## 3. Data Sources
+
+ - **Input data**: user-provided text, images, etc.
+ - **Output data**: model-generated text, images, etc.
+
+ ## 4. Development Standards
+
+ - **Code style**: Black
+ - **Test framework**: Pytest
+ - **Git workflow**: Follow the branching and deployment process defined in the `project_management_for_hf_spaces` skill.
README.md ADDED
@@ -0,0 +1,44 @@
+ ---
+ title: Ling Space
+ emoji: 🤖
+ colorFrom: blue
+ colorTo: green
+ sdk: gradio
+ sdk_version: 5.49.1
+ python_version: 3.13.7
+ app_file: app.py
+ pinned: false
+ secrets:
+   - OPENAI_API_KEY
+   - LING_API_TOKEN
+ ---
+
+ # Ling Space
+
+ A Gradio application for interacting with the Ling model family.
+
+ ## Features
+
+ - **Text chat**: converse with Ling models.
+ - **Model selection**: switch between the available models.
+ - **Configurability**: add and manage models through a configuration file.
+
+ ## Secrets Management
+
+ ### Local Development
+
+ 1. **Create the secrets file**: in the `ling-space` directory, create a file named `.secrets`.
+ 2. **Add your keys**: in `.secrets`, add your API keys in `KEY="VALUE"` format, for example:
+ ```
+ OPENAI_API_KEY="your_openai_api_key_here"
+ LING_API_TOKEN="your_ling_api_token_here"
+ ```
+ 3. **Loading in code**: `config.py` uses the `python-dotenv` library to load these environment variables from `.secrets` automatically; access them anywhere via `os.environ.get("YOUR_KEY_NAME")`.
+
+ **Note**: `.secrets` is listed in `.gitignore`, so your keys will not be committed by accident.
+
+ ### Production (Hugging Face Spaces)
+
+ 1. **Set the secret**: in your Hugging Face Space, open "Settings" -> "Secrets and variables" and add your API key as a "Secret".
+ 2. **Declare the secret**: the `secrets` field at the top of this `README.md` declares which keys the app needs; Hugging Face injects them as environment variables at runtime.
+ 3. **Access in code**: no code changes are required; the app reads the keys via `os.environ.get("YOUR_KEY_NAME")` exactly as it does locally.
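
The loading flow described above is small enough to show in full. A minimal sketch of it, assuming only that `.secrets` sits next to the code (illustrative, not part of the commit):

```python
import os
from dotenv import load_dotenv

# Read KEY="VALUE" pairs from the local .secrets file into the process
# environment. On Hugging Face Spaces there is no .secrets file, so this call
# finds nothing and the platform-injected Secrets are used as-is.
load_dotenv(dotenv_path=".secrets")

openai_api_key = os.environ.get("OPENAI_API_KEY")
ling_api_token = os.environ.get("LING_API_TOKEN")

if not openai_api_key:
    print("OPENAI_API_KEY is not set; remote models will not be reachable.")
```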
app.py ADDED
@@ -0,0 +1,108 @@
1
+ import gradio as gr
2
+ import uuid
3
+ from datetime import datetime
4
+ import pandas as pd
5
+ from model_handler import ModelHandler
6
+ from tab_chat import create_chat_tab
7
+ from tab_code import create_code_tab
8
+ from tab_smart_writer import create_smart_writer_tab
9
+ from tab_test import run_model_handler_test, run_clear_chat_test
10
+
11
+ def get_history_df(history):
12
+ if not history:
13
+ return pd.DataFrame({'ID': [], 'Conversation': []})
14
+ df = pd.DataFrame(history)
15
+ return df[['id', 'title']].rename(columns={'id': 'ID', 'title': 'Conversation'})
16
+
17
+ def on_app_load(history, conv_id):
18
+ """
19
+ Handles the application's initial state on load.
20
+ - If no history exists, creates a new conversation.
21
+ - If the last conversation ID is invalid, loads the most recent one.
22
+ - Otherwise, loads the last active conversation.
23
+ """
24
+ if not history:
25
+ # First time ever loading, create a new chat
26
+ conv_id = str(uuid.uuid4())
27
+ new_convo = { "id": conv_id, "title": "New Conversation", "messages": [], "timestamp": datetime.now().isoformat() }
28
+ history = [new_convo]
29
+ return conv_id, history, gr.update(value=get_history_df(history)), []
30
+
31
+ # Check if the last used conv_id is valid
32
+ if conv_id and any(c["id"] == conv_id for c in history):
33
+ # It's valid, load it
34
+ for convo in history:
35
+ if convo["id"] == conv_id:
36
+ return conv_id, history, gr.update(value=get_history_df(history)), convo["messages"]
37
+
38
+ # Last used conv_id is invalid or doesn't exist, load the most recent conversation
39
+ most_recent_convo = history[0] # Assumes history is sorted by timestamp desc
40
+ conv_id = most_recent_convo["id"]
41
+ return conv_id, history, gr.update(value=get_history_df(history)), most_recent_convo["messages"]
42
+
43
+
44
+ CSS = """
45
+
46
+ #chatbot {
47
+ height: calc(100vh - 21px - 16px);
48
+ max-height: 1500px;
49
+ }
50
+
51
+ footer {
52
+ display: none !important;
53
+ }
54
+ """
55
+
56
+ if __name__ == "__main__":
57
+ # Instantiate the model handler with the configuration
58
+ model_handler = ModelHandler()
59
+
60
+ with gr.Blocks(theme=gr.themes.Soft(),
61
+ css=CSS,
62
+ head="",
63
+ head_paths=['./static/toastify.html', './static/app.html'],
64
+ fill_height=True,
65
+ fill_width=True) as demo:
66
+ with gr.Tabs(elem_id='indicator-space-app') as tabs:
67
+
68
+ with gr.TabItem("文本聊天") as chat_tab:
69
+ conversation_store, current_conversation_id, history_df, chatbot = create_chat_tab()
70
+
71
+ chat_tab.select(
72
+ fn=None,
73
+ js="() => {window.dispatchEvent(new CustomEvent('tabSelect.chat')); console.log('this'); return null;}",
74
+ )
75
+
76
+ with gr.TabItem("代码生成") as code_tab:
77
+ create_code_tab()
78
+
79
+ code_tab.select(
80
+ fn=None,
81
+ js="() => {window.dispatchEvent(new CustomEvent('tabSelect.code')); return null;}",
82
+ )
83
+
84
+ with gr.TabItem("写作助手") as writer_tab:
85
+ create_smart_writer_tab()
86
+
87
+ writer_tab.select(
88
+ fn=None,
89
+ js="() => {window.dispatchEvent(new CustomEvent('tabSelect.writing')); return null;}",
90
+ )
91
+
92
+ with gr.TabItem("测试"):
93
+ gr.Markdown("# 功能测试")
94
+ with gr.Column():
95
+ test_log_output = gr.Textbox(label="测试日志", interactive=False, lines=10)
96
+ gr.Button("运行 ModelHandler 测试").click(run_model_handler_test, outputs=test_log_output)
97
+ gr.Button("运行 清除聊天 测试").click(run_clear_chat_test, outputs=test_log_output)
98
+
99
+ # Bind on_app_load to demo.load
100
+ demo.load(
101
+ on_app_load,
102
+ inputs=[conversation_store, current_conversation_id],
103
+ outputs=[current_conversation_id, conversation_store, history_df, chatbot],
104
+ js="() => {window.dispatchEvent(new CustomEvent('appStart')); console.log('appStart'); return {};}"
105
+ )
106
+
107
+ # Launch the Gradio application
108
+ demo.launch()
config.py ADDED
@@ -0,0 +1,125 @@
1
+ """
2
+ Configuration file for the Ling Spaces application.
3
+
4
+ This file centralizes all the configuration variables, such as API endpoints,
5
+ API keys, and system prompts for different functionalities.
6
+ """
7
+
8
+ import os
9
+ from dotenv import load_dotenv
10
+
11
+ # Load environment variables from .secrets file
12
+ load_dotenv(dotenv_path='.secrets')
13
+
14
+ # --- API Configuration ---
15
+ # API endpoint for OpenAI compatible services
16
+ OPEN_AI_ENTRYPOINT = os.getenv("OPEN_AI_ENTRYPOINT") or "https://api.openai.com/v1"
17
+ # API key for OpenAI compatible services
18
+ OPEN_AI_KEY = os.getenv("OPEN_AI_KEY")
19
+ # Brand name of the OpenAI compatible provider
20
+ OPEN_AI_PROVIDER = os.getenv("OPEN_AI_PROVIDER") or "OpenAI Compatible API"
21
+
22
+ # Fallback/warning for API keys
23
+ if not OPEN_AI_KEY:
24
+ print("⚠️ Warning: OPEN_AI_KEY is not set. Remote models may not function correctly.")
25
+ if not OPEN_AI_ENTRYPOINT:
26
+ print("⚠️ Warning: OPEN_AI_ENTRYPOINT is not set. Using default: https://api.openai.com/v1")
27
+
28
+
29
+ # --- Model Specifications ---
30
+
31
+ # Constants for easy referencing of models
32
+ LING_MINI_2_0 = "ling-mini-2.0"
33
+ LING_1T = "ling-1t"
34
+ LING_FLASH_2_0 = "ling-flash-2.0"
35
+ RING_1T = "ring-1t"
36
+ RING_FLASH_2_0 = "ring-flash-2.0"
37
+ RING_MINI_2_0 = "ring-mini-2.0"
38
+
39
+
40
+ CHAT_MODEL_SPECS = {
41
+ LING_MINI_2_0: {
42
+ "provider": "openai_compatible",
43
+ "model_id": "inclusionai/ling-mini-2.0",
44
+ "display_name": "🦉 Ling-mini-2.0",
45
+ "description": "A lightweight conversational model optimized for efficient operation on consumer-grade hardware, ideal for mobile or localized deployment scenarios.",
46
+ "url": "https://huggingface.co/inclusionai"
47
+ },
48
+ LING_1T: {
49
+ "provider": "openai_compatible",
50
+ "model_id": "inclusionai/ling-1t",
51
+ "display_name": "🦉 Ling-1T",
52
+ "description": "A trillion-parameter large language model designed for complex natural language understanding and generation tasks that require extreme performance and high fluency.",
53
+ "url": "https://huggingface.co/inclusionai"
54
+ },
55
+ LING_FLASH_2_0: {
56
+ "provider": "openai_compatible",
57
+ "model_id": "inclusionai/ling-flash-2.0",
58
+ "display_name": "🦉 Ling-flash-2.0",
59
+ "description": "A high-performance billion-parameter model optimized for scenarios requiring high-speed response and complex instruction following.",
60
+ "url": "https://huggingface.co/inclusionai"
61
+ },
62
+ RING_1T: {
63
+ "provider": "openai_compatible",
64
+ "model_id": "inclusionai/ring-1t",
65
+ "display_name": "💍️ Ring-1T",
66
+ "description": "A brand-new trillion-parameter reasoning model with powerful code generation and tool use capabilities.",
67
+ "url": "https://huggingface.co/inclusionai"
68
+ },
69
+ RING_FLASH_2_0: {
70
+ "provider": "openai_compatible",
71
+ "model_id": "inclusionai/ring-flash-2.0",
72
+ "display_name": "💍️ Ring-flash-2.0",
73
+ "description": "A billion-parameter reasoning model that strikes a good balance between performance and cost, suitable for general-purpose tasks that require step-by-step thinking or code generation.",
74
+ "url": "https://huggingface.co/inclusionai"
75
+ },
76
+ RING_MINI_2_0: {
77
+ "provider": "openai_compatible",
78
+ "model_id": "inclusionai/ring-mini-2.0",
79
+ "display_name": "💍️ Ring-mini-2.0",
80
+ "description": "A quantized and extremely efficient reasoning model designed for resource-constrained environments with strict speed and efficiency requirements (such as edge computing).",
81
+ "url": "https://huggingface.co/inclusionai"
82
+ }
83
+ }
84
+
85
+ # --- Code Framework Specifications ---
86
+
87
+ # Constants for easy referencing of code frameworks
88
+ STATIC_PAGE = "static_page"
89
+ GRADIO_APP = "gradio_app"
90
+
91
+ CODE_FRAMEWORK_SPECS = {
92
+ STATIC_PAGE: {
93
+ "display_name": "静态页面",
94
+ "description": "生成一个独立的、响应式的 HTML 文件,包含所有必要的 CSS 和 JavaScript。适合快速原型和简单的网页展示。"
95
+ }
96
+ }
97
+
98
+
99
+ # --- Utility Functions ---
100
+
101
+ _current_provider_name = OPEN_AI_PROVIDER
102
+
103
+ def set_current_provider(provider_name: str):
104
+ """Sets the current API provider name."""
105
+ global _current_provider_name
106
+ _current_provider_name = provider_name
107
+
108
+ def get_current_provider_name() -> str:
109
+ """Returns the current API provider name."""
110
+ return _current_provider_name
111
+
112
+ def get_model_id(model_constant: str) -> str:
113
+ """
114
+ Retrieves the internal model ID for a given model constant.
115
+ This is typically what's passed to the underlying API.
116
+ """
117
+ return CHAT_MODEL_SPECS.get(model_constant, {}).get("model_id", model_constant)
118
+
119
+ def get_model_display_name(model_constant: str) -> str:
120
+ """
121
+ Retrieves the display name for a given model constant.
122
+ This is what's shown in the UI.
123
+ """
124
+ return CHAT_MODEL_SPECS.get(model_constant, {}).get("display_name", model_constant)
125
+
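
The two lookup helpers above deliberately fall back to the constant itself when a model is missing from `CHAT_MODEL_SPECS`, so callers never receive `None`. A small illustration of that behaviour (assuming the imports shown in this commit; the unknown name is made up):

```python
from config import LING_1T, get_model_id, get_model_display_name

# Known constant: resolved through CHAT_MODEL_SPECS.
print(get_model_id(LING_1T))            # "inclusionai/ling-1t"
print(get_model_display_name(LING_1T))  # "🦉 Ling-1T"

# Unknown constant: both helpers return the input string unchanged.
print(get_model_id("made-up-model"))    # "made-up-model"
```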
ling-space.iml ADDED
@@ -0,0 +1,9 @@
+ <?xml version="1.0" encoding="UTF-8"?>
+ <module type="PYTHON_MODULE" version="4">
+   <component name="NewModuleRootManager" inherit-compiler-output="true">
+     <exclude-output />
+     <content url="file://$MODULE_DIR$" />
+     <orderEntry type="jdk" jdkName="Python 3.13 virtualenv at ~/workspace/ling-series-hf-spaces/ling-space/.venv" jdkType="Python SDK" />
+     <orderEntry type="sourceFolder" forTests="false" />
+   </component>
+ </module>
model_handler.py ADDED
@@ -0,0 +1,210 @@
1
+ from abc import ABC, abstractmethod
2
+ import httpx
3
+ import json
4
+ from config import CHAT_MODEL_SPECS, LING_1T, OPEN_AI_KEY, OPEN_AI_ENTRYPOINT, OPEN_AI_PROVIDER, get_model_id
5
+
6
+ class ModelProvider(ABC):
7
+ """
8
+ Abstract base class for a model provider. This allows for different
9
+ backends (e.g., local, OpenAI API) to be used interchangeably.
10
+ """
11
+ def __init__(self, provider_name):
12
+ self.provider_name = provider_name
13
+
14
+ @abstractmethod
15
+ def get_response(self, model_id, message, chat_history):
16
+ """
17
+ Generates a response from a model.
18
+
19
+ :param model_id: The internal model ID to use.
20
+ :param message: The user's message.
21
+ :param chat_history: The current chat history.
22
+ :return: A generator that yields the response.
23
+ """
24
+ pass
25
+
26
+ class OpenAICompatibleProvider(ModelProvider):
27
+ """
28
+ A model provider for any OpenAI compatible API.
29
+ """
30
+ def __init__(self, provider_name="openai_compatible"):
31
+ super().__init__(provider_name)
32
+ self.api_key = OPEN_AI_KEY
33
+ self.api_base = OPEN_AI_ENTRYPOINT
34
+
35
+ if not self.api_key or not self.api_base:
36
+ print("Warning: OPEN_AI_KEY or OPEN_AI_ENTRYPOINT not found in environment.")
37
+
38
+ def get_response(self, model_id, message, chat_history, system_prompt, temperature=0.7):
39
+ """
40
+ Makes a real API call to an OpenAI compatible API and streams the response.
41
+ """
42
+ print(f"DEBUG: Received system_prompt: {system_prompt}, temperature: {temperature}") # Debug print
43
+ headers = {
44
+ "Authorization": f"Bearer {self.api_key}",
45
+ "Content-Type": "application/json",
46
+ }
47
+
48
+ # Build message history for API call
49
+ messages_for_api = []
50
+ if system_prompt:  # if a system prompt was provided, put it at the front of the message list
51
+ messages_for_api.append({"role": "system", "content": system_prompt})
52
+
53
+ if chat_history:
54
+ for item in chat_history:
55
+ if isinstance(item, dict) and "role" in item and "content" in item:
56
+ messages_for_api.append(item)
57
+ messages_for_api.append({"role": "user", "content": message})
58
+
59
+ json_data = {
60
+ "model": model_id,
61
+ "messages": messages_for_api, # Use the new list
62
+ "stream": True,
63
+ "temperature": temperature,
64
+ }
65
+
66
+ # Append user's message to chat_history for UI display
67
+ chat_history.append({"role": "user", "content": message})
68
+ # Initialize assistant's response in chat_history
69
+ chat_history.append({"role": "assistant", "content": ""}) # Placeholder for assistant's streaming response
70
+
71
+ try:
72
+ with httpx.stream(
73
+ "POST",
74
+ f"{self.api_base}/chat/completions",
75
+ headers=headers,
76
+ json=json_data,
77
+ timeout=120,
78
+ ) as response:
79
+ response.raise_for_status()
80
+ for chunk in response.iter_lines():
81
+ if chunk.startswith("data:"):
82
+ chunk = chunk[5:].strip()
83
+ if chunk == "[DONE]":
84
+ break
85
+ try:
86
+ data = json.loads(chunk)
87
+ if "choices" in data and data["choices"]:
88
+ delta = data["choices"][0].get("delta", {})
89
+ content_chunk = delta.get("content")
90
+ if content_chunk:
91
+ chat_history[-1]["content"] += content_chunk
92
+ yield chat_history
93
+ except json.JSONDecodeError:
94
+ print(f"Error decoding JSON chunk: {chunk}")
95
+ except Exception as e:
96
+ print(f"Error during API call: {e}")
97
+ # Ensure the last message (assistant's placeholder) is updated with the error
98
+ if chat_history and chat_history[-1]["role"] == "assistant":
99
+ chat_history[-1]["content"] = f"An error occurred: {e}"
100
+ else:
101
+ chat_history.append({"role": "assistant", "content": f"An error occurred: {e}"})
102
+ yield chat_history
103
+
104
+
105
+ def get_code_response(self, model_id, system_prompt, user_prompt, temperature=0.7):
106
+ """
107
+ Makes a real API call for code generation and streams the response.
108
+ """
109
+ headers = {
110
+ "Authorization": f"Bearer {self.api_key}",
111
+ "Content-Type": "application/json",
112
+ }
113
+
114
+ messages_for_api = [
115
+ {"role": "system", "content": system_prompt},
116
+ {"role": "user", "content": user_prompt}
117
+ ]
118
+
119
+ json_data = {
120
+ "model": model_id,
121
+ "messages": messages_for_api,
122
+ "stream": True,
123
+ "temperature": temperature,
124
+ }
125
+
126
+ try:
127
+ with httpx.stream("POST", f"{self.api_base}/chat/completions", headers=headers, json=json_data, timeout=120) as response:
128
+ response.raise_for_status()
129
+ for chunk in response.iter_lines():
130
+ if chunk.startswith("data:"):
131
+ chunk = chunk[5:].strip()
132
+ if chunk == "[DONE]":
133
+ break
134
+ try:
135
+ data = json.loads(chunk)
136
+ if "choices" in data and data["choices"]:
137
+ delta = data["choices"][0].get("delta", {})
138
+ content_chunk = delta.get("content")
139
+ if content_chunk:
140
+ yield content_chunk
141
+ except json.JSONDecodeError:
142
+ print(f"Error decoding JSON chunk: {chunk}")
143
+ except Exception as e:
144
+ print(f"Error during API call: {e}")
145
+ yield f"An error occurred: {e}"
146
+
147
+ class ModelHandler:
148
+ """
149
+ Manages different models and providers, acting as a facade for the UI.
150
+ """
151
+ def __init__(self):
152
+ """
153
+ Initializes the ModelHandler with the global CHAT_MODEL_SPECS.
154
+ """
155
+ self.config = CHAT_MODEL_SPECS
156
+ self.providers = {
157
+ "openai_compatible": OpenAICompatibleProvider()
158
+ }
159
+
160
+ self.api_provider_brand = OPEN_AI_PROVIDER
161
+
162
+ def get_response(self, model_constant, message, chat_history, system_prompt, temperature=0.7):
163
+ """
164
+ Gets a response from the appropriate model and provider.
165
+
166
+ :param model_constant: The constant name of the model (e.g., LING_MODEL_A).
167
+ :param message: The user's message.
168
+ :param chat_history: The current chat history.
169
+ :param system_prompt: The system prompt to guide the model's behavior.
170
+ :param temperature: The temperature for the model.
171
+ :return: A generator that yields the response.
172
+ """
173
+ model_spec = self.config.get(model_constant, {})
174
+ provider_name = model_spec.get("provider")
175
+ model_id = model_spec.get("model_id")
176
+
177
+ # Handle the case where chat_history might be None
178
+ if chat_history is None:
179
+ chat_history = []
180
+
181
+ if not provider_name or provider_name not in self.providers:
182
+ full_response = f"Error: Model '{model_constant}' or its provider '{provider_name}' not configured."
183
+ chat_history.append([message, full_response])
184
+ yield chat_history
185
+ return
186
+
187
+ provider = self.providers[provider_name]
188
+
189
+ yield from provider.get_response(model_id, message, chat_history, system_prompt, temperature)
190
+
191
+ def generate_code(self, system_prompt, user_prompt, code_type, model_choice):
192
+ """
193
+ Generates code using the specified model.
194
+ """
195
+ model_constant = next((k for k, v in CHAT_MODEL_SPECS.items() if v["display_name"] == model_choice), None)
196
+ if not model_constant:
197
+ # Fallback if display name not found, maybe model_choice is the constant itself
198
+ model_constant = model_choice if model_choice in CHAT_MODEL_SPECS else LING_1T  # use the constant; the literal string "LING_1T" is never a key in CHAT_MODEL_SPECS
199
+
200
+
201
+ model_spec = self.config.get(model_constant, {})
202
+ provider_name = model_spec.get("provider")
203
+ model_id = model_spec.get("model_id")
204
+
205
+ if not provider_name or provider_name not in self.providers:
206
+ yield f"Error: Model '{model_constant}' or its provider '{provider_name}' not configured."
207
+ return
208
+
209
+ provider = self.providers[provider_name]
210
+ yield from provider.get_code_response(model_id, system_prompt, user_prompt)
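
As its docstring says, `ModelHandler.get_response` is a generator: after every streamed chunk it yields the full message list, which is what lets a `gr.Chatbot(type="messages")` refresh incrementally. A minimal consumer sketch (the event wiring around it is an assumption, not code from this commit):

```python
from model_handler import ModelHandler
from config import LING_1T

handler = ModelHandler()

def respond(message, chat_history):
    # Each yielded value is the whole history (a list of {"role", "content"}
    # dicts), so it can be streamed straight into a Gradio Chatbot output.
    yield from handler.get_response(
        LING_1T,
        message,
        chat_history,
        system_prompt="You are a helpful assistant.",
        temperature=0.7,
    )
```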
recommand_config.py ADDED
@@ -0,0 +1,53 @@
1
+ # -*- coding: utf-8 -*-
2
+
3
+ from config import LING_1T, LING_FLASH_2_0, RING_1T, RING_FLASH_2_0, LING_MINI_2_0, RING_MINI_2_0, get_model_display_name
4
+
5
+ """
6
+ This file contains the recommended initial inputs for the chat tab.
7
+ Each item in the `RECOMMENDED_INPUTS` list is a dictionary that represents a preset scenario.
8
+ """
9
+
10
+ RECOMMENDED_INPUTS = [
11
+ {
12
+ "task": "创意写作",
13
+ "model": get_model_display_name(LING_1T),
14
+ "system_prompt": "你是一位才华横溢的作家,擅长创作富有想象力的故事。",
15
+ "user_message": "写一个关于一只会说话的猫和它的机器人朋友的短篇故事。",
16
+ "temperature": 0.8,
17
+ },
18
+ {
19
+ "task": "代码生成",
20
+ "model": get_model_display_name(RING_1T),
21
+ "system_prompt": "你是一个精通多种编程语言的 AI 编程助手。",
22
+ "user_message": "用 Python 写一个函数,计算一个列表中的斐波那契数列。",
23
+ "temperature": 0.2,
24
+ },
25
+ {
26
+ "task": "邮件撰写",
27
+ "model": get_model_display_name(LING_FLASH_2_0),
28
+ "system_prompt": "你是一位专业的商务助理,擅长撰写清晰、简洁的商务邮件。",
29
+ "user_message": "帮我写一封邮件,向我的团队成员宣布我们下周五下午将举行一个项目启动会议。",
30
+ "temperature": 0.7,
31
+ },
32
+ {
33
+ "task": "学习计划",
34
+ "model": get_model_display_name(LING_MINI_2_0),
35
+ "system_prompt": "你是一位经验丰富的学习导师,能够为用户量身定制学习计划。",
36
+ "user_message": "我想学习弹吉他,请为我制定一个为期一个月的初学者入门计划。",
37
+ "temperature": 0.6,
38
+ },
39
+ {
40
+ "task": "角色扮演",
41
+ "model": get_model_display_name(RING_FLASH_2_0),
42
+ "system_prompt": "你现在是莎士比亚,请用他的风格和语言来回答问题。",
43
+ "user_message": "生存还是毁灭,这是一个值得考虑的问题。",
44
+ "temperature": 0.9,
45
+ },
46
+ {
47
+ "task": "技术问答",
48
+ "model": get_model_display_name(RING_MINI_2_0),
49
+ "system_prompt": "你是一位资深的软件工程师,精通各种技术栈。",
50
+ "user_message": "请解释一下什么是“容器化”,以及它与“虚拟化”有什么区别?",
51
+ "temperature": 0.4,
52
+ }
53
+ ]
requirements.txt ADDED
@@ -0,0 +1,5 @@
+ gradio==5.49.1
+ python-dotenv
+ httpx
+ # transformers
+ # torch
static/app.html ADDED
@@ -0,0 +1,276 @@
1
+ <script>
2
+ (function () {
3
+
4
+ console.info("Space App JS executing...");
5
+
6
+ // --- Logger utility (structured, levelled, timestamped) ---
7
+ const SpaceApp = {
8
+
9
+ hasBooted: false,
10
+ fnUnloadLastTab: null,
11
+
12
+ toastInfo: function(...args) {
13
+ // use toastify
14
+ window.Toastify({
15
+ text: args.join(' '),
16
+ duration: 3000,
17
+ gravity: "top",
18
+ position: "right",
19
+ backgroundColor: "green",
20
+ }).showToast();
21
+ console.info("TOAST_INFO", ...args);
22
+ },
23
+
24
+ toastError: function(...args) {
25
+ window.Toastify({
26
+ text: args.join(' '),
27
+ duration: 5000,
28
+ gravity: "top",
29
+ position: "right",
30
+ backgroundColor: "red",
31
+ }).showToast();
32
+ console.error("TOAST_ERROR", ...args);
33
+ },
34
+
35
+ unloadLastTab: function() {
36
+ if (this.fnUnloadLastTab) {
37
+ this.fnUnloadLastTab();
38
+ this.fnUnloadLastTab = null;
39
+ }
40
+ },
41
+
42
+ TextGeneratorTab: (function () {
43
+ return {
44
+ init: function () {
45
+ console.info("TextGeneratorTab initialized.");
46
+ },
47
+ toggle: function () {
48
+ // Placeholder for future functionality
49
+ SpaceApp.toastInfo("TextGeneratorTab toggled.");
50
+ SpaceApp.unloadLastTab();
51
+ }
52
+ };
53
+ })(),
54
+
55
+ WebGeneratorTab: (function () {
56
+ return {
57
+ init: function () {
58
+ console.info("WebGeneratorTab initialized.");
59
+ },
60
+ toggle: function () {
61
+ // Placeholder for future functionality
62
+ SpaceApp.toastInfo("WebGeneratorTab toggled.");
63
+ SpaceApp.unloadLastTab();
64
+ }
65
+ };
66
+ })(),
67
+
68
+ WritingAssistantTab: (function () {
69
+ return {
70
+ init: function () {
71
+ console.info("WritingAssistantTab initialized.");
72
+ },
73
+ toggle: function () {
74
+ // Placeholder for future functionality
75
+ SpaceApp.toastInfo("WritingAssistantTab toggled.");
76
+ SpaceApp.unloadLastTab();
77
+
78
+ // Find the element with id 'writing-editor' and register editing shortcuts:
+ // Tab - clicks the 'btn-action-accept-flow' button
+ // Shift + Tab - clicks the 'btn-action-change-flow' button
+ // Cmd/Ctrl + Enter - clicks the 'btn-action-create-paragraph' button
+ // Shift + Enter - clicks the 'btn-action-change-paragraph' button
83
+ const editor = document.getElementById('writing-editor');
84
+ if (editor) {
85
+ // If listeners have already been attached, do not attach them again
86
+ if (editor.getAttribute('data-listener-set') === 'true') {
87
+ console.info("Writing Assistant Editor already has listeners set.");
88
+ return;
89
+ }
90
+
91
+ const idToEventFilterList = [
92
+ ['btn-action-change-flow', (e) => e.shiftKey && e.key === 'Tab'],
93
+ ['btn-action-accept-flow', (e) => !e.shiftKey && e.key === 'Tab'],
94
+ ['btn-action-change-paragraph', (e) => e.shiftKey && e.key === 'Enter'],
95
+ ['btn-action-create-paragraph', (e) => (e.metaKey || e.ctrlKey) && e.key === 'Enter'],
96
+ ]
97
+ editor.addEventListener('keydown', (e) => {
98
+ for (const [buttonId, filterFn] of idToEventFilterList) {
99
+ if (filterFn(e)) {
100
+ e.preventDefault();
101
+ const button = document.getElementById(buttonId);
102
+ if (button) {
103
+ SpaceApp.toastInfo(`Writing Assistant: Triggered action for ${buttonId}`);
104
+ button.click();
105
+ } else {
106
+ SpaceApp.toastError(`Writing Assistant: Button with id ${buttonId} not found.`);
107
+ }
108
+ break; // Only trigger one action per keydown
109
+ }
110
+ }
111
+ });
112
+
113
+ // 对已经设置过监听器的 editor,不要重复设置
114
+ editor.setAttribute('data-listener-set', 'true');
115
+ }
116
+ }
117
+ };
118
+ })(),
119
+
120
+ registerEventListeners: function() {
121
+ const listeners = {
122
+ 'tabSelect.chat': () => SpaceApp.TextGeneratorTab.toggle(),
123
+ 'tabSelect.code': () => SpaceApp.WebGeneratorTab.toggle(),
124
+ 'tabSelect.writing': () => SpaceApp.WritingAssistantTab.toggle(),
125
+ };
126
+ for (const [event, handler] of Object.entries(listeners)) {
127
+ window.addEventListener(event, handler);
128
+ console.info(`Registered event listener for ${event}`);
129
+ }
130
+
131
+ // type_filter, handler
132
+ // Listen for messages {type: string, payload: object}
133
+ const messages = {
134
+ 'iframeError': (data) => {
135
+ // 1. Show visual toast
136
+ SpaceApp.toastError("Iframe Error Detected:", JSON.stringify(data));
137
+
138
+ // 2. Propagate to Backend via Gradio
139
+ const jsErrorChannel = document.querySelector('.js_error_channel textarea');
140
+ if (jsErrorChannel) {
141
+ // Format as string for backend
142
+ const errorMessage = data.message || JSON.stringify(data);
143
+ const timestamp = new Date().toLocaleTimeString();
144
+ jsErrorChannel.value = `[${timestamp}] ${errorMessage}\nStack: ${data.stack || 'N/A'}`;
145
+ jsErrorChannel.dispatchEvent(new Event('input', { bubbles: true }));
146
+ } else {
147
+ console.warn("Gradio channel '.js_error_channel' not found. Backend logging skipped.");
148
+ }
149
+ },
150
+ }
151
+ window.addEventListener('message', (event) => {
152
+ const {type, payload} = event.data || {};
153
+ if (type && messages[type]) {
154
+ messages[type](payload);
155
+ }
156
+ });
157
+
158
+ },
159
+
160
+ };
161
+
162
+ // Expose bootstrap globally so external scripts can call it after page load
163
+ window.bootApp = function () {
164
+ if (SpaceApp.hasBooted) {
165
+ console.warn("Space App has already booted. Ignoring duplicate boot call.");
166
+ return;
167
+ }
168
+ SpaceApp.hasBooted = true;
169
+ SpaceApp.toastInfo("Booting Space App...");
170
+ SpaceApp.registerEventListeners();
171
+ SpaceApp.TextGeneratorTab.init();
172
+ SpaceApp.WebGeneratorTab.init();
173
+ SpaceApp.WritingAssistantTab.init();
174
+ SpaceApp.toastInfo("Space App booted successfully.");
175
+ };
176
+
177
+ window.SpaceApp = SpaceApp;
178
+
179
+ console.info("Space App JS execution completed.");
180
+
181
+ })();
182
+ </script>
183
+ <script>
184
+ // Register an appStart listener and call bootApp once the Gradio app has started
185
+ window.addEventListener('appStart', function () {
186
+ console.info("Gradio appStart event detected. Booting Space App...");
187
+ if (typeof window.bootApp === "function") {
188
+ window.bootApp();
189
+ } else {
190
+ console.error("bootApp function is not defined.");
191
+ }
192
+ });
193
+ </script>
194
+ <script>
195
+ (function() {
196
+
197
+ const ErrorPoller = {
198
+ lastErrorMessage: '',
199
+ startPolling: () => {
200
+
201
+ },
202
+ stopPolling: () => {
203
+
204
+ }
205
+ }
206
+
207
+ console.info('Iframe Error Poller Initialized');
208
+ let lastErrorMessage = '';
209
+
210
+ // Function to start polling for errors in the iframe
211
+ function startPolling() {
212
+
213
+ console.info('Starting to poll iframe for errors...');
214
+
215
+ setInterval(() => {
216
+ // Re-select the iframe on every interval tick because Gradio replaces it.
217
+ const previewIframe = document.querySelector('iframe[srcdoc]');
218
+ if (!previewIframe) return;
219
+
220
+ try {
221
+ const iframeDoc = previewIframe.contentWindow.document;
222
+ const errorContainer = iframeDoc.querySelector('#global-error');
223
+
224
+ const jsErrorChannel = document.querySelector('.js_error_channel textarea');
225
+ if (!jsErrorChannel) {
226
+ console.error("Gradio channel '.js_error_channel' not found.");
227
+ return;
228
+ }
229
+
230
+ if (errorContainer && errorContainer.style.display !== 'none') {
231
+ const currentErrorMessage = errorContainer.innerText;
232
+
233
+ // If the error message is new, send it to the backend
234
+ if (currentErrorMessage && currentErrorMessage !== lastErrorMessage) {
235
+ lastErrorMessage = currentErrorMessage;
236
+
237
+ // Set the value on the hidden Gradio textbox
238
+ jsErrorChannel.value = currentErrorMessage;
239
+
240
+ // Dispatch an 'input' event to notify Gradio of the change
241
+ jsErrorChannel.dispatchEvent(new Event('input', { bubbles: true }));
242
+ }
243
+ } else {
244
+ // If the error container is not visible or doesn't exist, and there was a previous error, clear it.
245
+ if (lastErrorMessage) {
246
+ lastErrorMessage = '';
247
+ jsErrorChannel.value = '';
248
+ jsErrorChannel.dispatchEvent(new Event('input', { bubbles: true }));
249
+ }
250
+ }
251
+ } catch (e) {
252
+ // This can happen due to cross-origin restrictions if the iframe src changes.
253
+ // We can ignore it for srcdoc iframes.
254
+ }
255
+ }, 1000);
256
+
257
+ }
258
+
259
+ // Start polling once the page has fully loaded
260
+ document.addEventListener('readystatechange', (event) => {
261
+ if (document.readyState === 'complete') {
262
+ setTimeout(startPolling, 500);
263
+ }
264
+ });
265
+
266
+ console.info('Iframe Error Poller Script Loaded');
267
+ })();
268
+ </script>
269
+ <script>
270
+ // We are running inside an iframe; inspect document, window, and window.parent.
271
+
272
+ console.log('document is', document);
273
+ console.log('window is ', window);
274
+ console.log('window parent is ', window.parent);
275
+
276
+ </script>
static/catch-error.js ADDED
@@ -0,0 +1,157 @@
1
+ (function() {
2
+
3
+ // Console 日志
4
+ console.info('全局异常捕获脚本已加载,开始监听错误...');
5
+
6
+ // 1. 初始化错误显示容器 (保留原有的 Visual Toast 功能)
7
+ const errorContainer = document.createElement('div');
8
+ errorContainer.id = 'global-error';
9
+ // 设置样式:全屏、黑色背景、红色文字、置顶
10
+ errorContainer.style.cssText = `
11
+ position: fixed;
12
+ top: 0;
13
+ left: 0;
14
+ width: 100vw;
15
+ height: 100vh;
16
+ background-color: rgba(0, 0, 0, 0.9);
17
+ color: #ff5555;
18
+ z-index: 999999;
19
+ overflow-y: auto;
20
+ padding: 20px;
21
+ font-family: 'Consolas', 'Monaco', monospace;
22
+ font-size: 14px;
23
+ white-space: pre-wrap;
24
+ display: none; /* 默认隐藏,有错误时显示 */
25
+ box-sizing: border-box;
26
+ `;
27
+
28
+ // 添加标题和关闭提示
29
+ const header = document.createElement('div');
30
+ header.style.borderBottom = '1px solid #555';
31
+ header.style.paddingBottom = '10px';
32
+ header.style.marginBottom = '10px';
33
+ header.innerHTML = '<h2 style="margin:0; color:#fff;">🚨 全局异常捕获监控</h2><small style="color:#aaa">点击页面任意处可临时关闭蒙层</small>';
34
+ errorContainer.appendChild(header);
35
+
36
+ // 点击关闭功能
37
+ errorContainer.addEventListener('click', () => {
38
+ errorContainer.style.display = 'none';
39
+ });
40
+
41
+ // 确保 DOM 加载后插入 body,或者如果 body 存在直接插入
42
+ if (document.body) {
43
+ document.body.appendChild(errorContainer);
44
+ } else {
45
+ window.addEventListener('DOMContentLoaded', () => document.body.appendChild(errorContainer));
46
+ }
47
+
48
+ // 2. 核心处理函数:既显示在屏幕上,也发送给父窗口
49
+ function handleError(type, details) {
50
+ // A. 显示在屏幕上 (Visual Toast)
51
+ logErrorToScreen(type, details);
52
+
53
+ // B. 发送给父窗口 (iframe 通信)
54
+ postErrorToParent(type, details);
55
+ }
56
+
57
+ // 显示在屏幕上的具体实现
58
+ function logErrorToScreen(type, details) {
59
+ // 显示蒙层
60
+ errorContainer.style.display = 'block';
61
+
62
+ const reportItem = document.createElement('div');
63
+ reportItem.style.marginBottom = '20px';
64
+ reportItem.style.borderBottom = '1px dashed #444';
65
+ reportItem.style.paddingBottom = '10px';
66
+
67
+ // 格式化时间
68
+ const time = new Date().toLocaleTimeString();
69
+
70
+ // 构建 HTML 内容
71
+ reportItem.innerHTML = `
72
+ <div style="color: #fff; font-weight: bold;">[${time}] <span style="background:#b00; padding:2px 5px; border-radius:3px;">${type}</span></div>
73
+ <div style="margin-top:5px; color: #ffaaaa;">${details.message || '无错误信息'}</div>
74
+ ${details.filename ? `<div style="color: #888;">Location: ${details.filename}:${details.lineno}:${details.colno}</div>` : ''}
75
+ ${details.stack ? `<pre style="color: #aaa; background: #111; padding: 10px; overflow-x: auto; margin-top:5px;">${details.stack}</pre>` : ''}
76
+ ${details.selector ? `<div style="color: #888;">Element: &lt;${details.selector}&gt; (src: ${details.src})</div>` : ''}
77
+ `;
78
+
79
+ // 插入到标题之后,内容的顶部
80
+ header.after(reportItem);
81
+ }
82
+
83
+ // 发送给父窗口的具体实现
84
+ function postErrorToParent(type, details) {
85
+ if (window.parent && window.parent !== window) {
86
+ const msg = {
87
+ type: 'iframeError',
88
+ payload: {
89
+ errorType: type,
90
+ message: details.message || 'Unknown Error',
91
+ stack: details.stack || '',
92
+ filename: details.filename || '',
93
+ lineno: details.lineno || 0,
94
+ colno: details.colno || 0,
95
+ selector: details.selector || '',
96
+ src: details.src || '',
97
+ href: window.location.href,
98
+ timestamp: new Date().toISOString()
99
+ }
100
+ };
101
+
102
+ // 使用 '*' 允许跨域传递,或者根据需要指定特定 origin
103
+ window.parent.postMessage(msg, '*');
104
+ console.info('[CatchError] Posted error to parent:', msg.payload);
105
+ }
106
+ }
107
+
108
+ // 3. 监听器 A: 捕获 JS 运行时错误 + 资源加载错误 (img/script)
109
+ // 注意:第三个参数 true (useCapture) 是捕获资源错误的关键
110
+ window.addEventListener('error', (event) => {
111
+ // 情况 1: 资源加载错误 (img, script, link)
112
+ // 资源错误没有冒泡,但在捕获阶段可以拦截,且 target 是 DOM 元素
113
+ if (event.target && (event.target instanceof HTMLElement)) {
114
+ const target = event.target;
115
+ handleError('Resource Error', {
116
+ message: `资源加载失败 (${target.tagName})`,
117
+ selector: target.tagName.toLowerCase(),
118
+ src: target.src || target.href || 'unknown source',
119
+ stack: 'N/A (Network Error)'
120
+ });
121
+ }
122
+ // 情况 2: 普通 JS 运行时错误
123
+ else {
124
+ handleError('Runtime Error', {
125
+ message: event.message,
126
+ filename: event.filename,
127
+ lineno: event.lineno,
128
+ colno: event.colno,
129
+ stack: event.error ? event.error.stack : '无堆栈信息'
130
+ });
131
+ }
132
+ }, true); // useCapture = true
133
+
134
+ // 4. 监听器 B: 捕获未处理的 Promise Rejection
135
+ window.addEventListener('unhandledrejection', (event) => {
136
+ // 提取错误原因
137
+ let reason = event.reason;
138
+ let stack = '无堆栈信息';
139
+ let message = '';
140
+
141
+ if (reason instanceof Error) {
142
+ message = reason.message;
143
+ stack = reason.stack;
144
+ } else {
145
+ // 如果 reject 的不是 Error 对象(例如 reject("foo"))
146
+ message = typeof reason === 'object' ? JSON.stringify(reason) : String(reason);
147
+ }
148
+
149
+ handleError('Unhandled Promise', {
150
+ message: `Promise 被 Reject 且未被 Catch: ${message}`,
151
+ stack: stack
152
+ });
153
+ });
154
+
155
+ console.info('全局异常捕获脚本已初始化完成 (Visual + PostMessage)。');
156
+
157
+ })();
static/toastify.html ADDED
@@ -0,0 +1,535 @@
1
+ <style>
2
+ /*!
3
+ * Toastify js 1.12.0
4
+ * https://github.com/apvarun/toastify-js
5
+ * @license MIT licensed
6
+ *
7
+ * Copyright (C) 2018 Varun A P
8
+ */
9
+
10
+ .toastify {
11
+ padding: 12px 20px;
12
+ color: #ffffff;
13
+ display: inline-block;
14
+ box-shadow: 0 3px 6px -1px rgba(0, 0, 0, 0.12), 0 10px 36px -4px rgba(77, 96, 232, 0.3);
15
+ background: -webkit-linear-gradient(315deg, #73a5ff, #5477f5);
16
+ background: linear-gradient(135deg, #73a5ff, #5477f5);
17
+ position: fixed;
18
+ opacity: 0;
19
+ transition: all 0.4s cubic-bezier(0.215, 0.61, 0.355, 1);
20
+ border-radius: 2px;
21
+ cursor: pointer;
22
+ text-decoration: none;
23
+ max-width: calc(50% - 20px);
24
+ z-index: 2147483647;
25
+ }
26
+
27
+ .toastify.on {
28
+ opacity: 1;
29
+ }
30
+
31
+ .toast-close {
32
+ background: transparent;
33
+ border: 0;
34
+ color: white;
35
+ cursor: pointer;
36
+ font-family: inherit;
37
+ font-size: 1em;
38
+ opacity: 0.4;
39
+ padding: 0 5px;
40
+ }
41
+
42
+ .toastify-right {
43
+ right: 15px;
44
+ }
45
+
46
+ .toastify-left {
47
+ left: 15px;
48
+ }
49
+
50
+ .toastify-top {
51
+ top: -150px;
52
+ }
53
+
54
+ .toastify-bottom {
55
+ bottom: -150px;
56
+ }
57
+
58
+ .toastify-rounded {
59
+ border-radius: 25px;
60
+ }
61
+
62
+ .toastify-avatar {
63
+ width: 1.5em;
64
+ height: 1.5em;
65
+ margin: -7px 5px;
66
+ border-radius: 2px;
67
+ }
68
+
69
+ .toastify-center {
70
+ margin-left: auto;
71
+ margin-right: auto;
72
+ left: 0;
73
+ right: 0;
74
+ max-width: fit-content;
75
+ max-width: -moz-fit-content;
76
+ }
77
+
78
+ @media only screen and (max-width: 360px) {
79
+ .toastify-right, .toastify-left {
80
+ margin-left: auto;
81
+ margin-right: auto;
82
+ left: 0;
83
+ right: 0;
84
+ max-width: fit-content;
85
+ }
86
+ }
87
+ </style>
88
+
89
+ <script>
90
+ /*!
91
+ * Toastify js 1.12.0
92
+ * https://github.com/apvarun/toastify-js
93
+ * @license MIT licensed
94
+ *
95
+ * Copyright (C) 2018 Varun A P
96
+ */
97
+ (function(root, factory) {
98
+ if (typeof module === "object" && module.exports) {
99
+ module.exports = factory();
100
+ } else {
101
+ root.Toastify = factory();
102
+ }
103
+ })(this, function(global) {
104
+ // Object initialization
105
+ var Toastify = function(options) {
106
+ // Returning a new init object
107
+ return new Toastify.lib.init(options);
108
+ },
109
+ // Library version
110
+ version = "1.12.0";
111
+
112
+ // Set the default global options
113
+ Toastify.defaults = {
114
+ oldestFirst: true,
115
+ text: "Toastify is awesome!",
116
+ node: undefined,
117
+ duration: 3000,
118
+ selector: undefined,
119
+ callback: function () {
120
+ },
121
+ destination: undefined,
122
+ newWindow: false,
123
+ close: false,
124
+ gravity: "toastify-top",
125
+ positionLeft: false,
126
+ position: '',
127
+ backgroundColor: '',
128
+ avatar: "",
129
+ className: "",
130
+ stopOnFocus: true,
131
+ onClick: function () {
132
+ },
133
+ offset: {x: 0, y: 0},
134
+ escapeMarkup: true,
135
+ ariaLive: 'polite',
136
+ style: {background: ''}
137
+ };
138
+
139
+ // Defining the prototype of the object
140
+ Toastify.lib = Toastify.prototype = {
141
+ toastify: version,
142
+
143
+ constructor: Toastify,
144
+
145
+ // Initializing the object with required parameters
146
+ init: function(options) {
147
+ // Verifying and validating the input object
148
+ if (!options) {
149
+ options = {};
150
+ }
151
+
152
+ // Creating the options object
153
+ this.options = {};
154
+
155
+ this.toastElement = null;
156
+
157
+ // Validating the options
158
+ this.options.text = options.text || Toastify.defaults.text; // Display message
159
+ this.options.node = options.node || Toastify.defaults.node; // Display content as node
160
+ this.options.duration = options.duration === 0 ? 0 : options.duration || Toastify.defaults.duration; // Display duration
161
+ this.options.selector = options.selector || Toastify.defaults.selector; // Parent selector
162
+ this.options.callback = options.callback || Toastify.defaults.callback; // Callback after display
163
+ this.options.destination = options.destination || Toastify.defaults.destination; // On-click destination
164
+ this.options.newWindow = options.newWindow || Toastify.defaults.newWindow; // Open destination in new window
165
+ this.options.close = options.close || Toastify.defaults.close; // Show toast close icon
166
+ this.options.gravity = options.gravity === "bottom" ? "toastify-bottom" : Toastify.defaults.gravity; // toast position - top or bottom
167
+ this.options.positionLeft = options.positionLeft || Toastify.defaults.positionLeft; // toast position - left or right
168
+ this.options.position = options.position || Toastify.defaults.position; // toast position - left or right
169
+ this.options.backgroundColor = options.backgroundColor || Toastify.defaults.backgroundColor; // toast background color
170
+ this.options.avatar = options.avatar || Toastify.defaults.avatar; // img element src - url or a path
171
+ this.options.className = options.className || Toastify.defaults.className; // additional class names for the toast
172
+ this.options.stopOnFocus = options.stopOnFocus === undefined ? Toastify.defaults.stopOnFocus : options.stopOnFocus; // stop timeout on focus
173
+ this.options.onClick = options.onClick || Toastify.defaults.onClick; // Callback after click
174
+ this.options.offset = options.offset || Toastify.defaults.offset; // toast offset
175
+ this.options.escapeMarkup = options.escapeMarkup !== undefined ? options.escapeMarkup : Toastify.defaults.escapeMarkup;
176
+ this.options.ariaLive = options.ariaLive || Toastify.defaults.ariaLive;
177
+ this.options.style = options.style || Toastify.defaults.style;
178
+ if(options.backgroundColor) {
179
+ this.options.style.background = options.backgroundColor;
180
+ }
181
+
182
+ // Returning the current object for chaining functions
183
+ return this;
184
+ },
185
+
186
+ // Building the DOM element
187
+ buildToast: function() {
188
+ // Validating if the options are defined
189
+ if (!this.options) {
190
+ throw "Toastify is not initialized";
191
+ }
192
+
193
+ // Creating the DOM object
194
+ var divElement = document.createElement("div");
195
+ divElement.className = "toastify on " + this.options.className;
196
+
197
+ // Positioning toast to left or right or center
198
+ if (!!this.options.position) {
199
+ divElement.className += " toastify-" + this.options.position;
200
+ } else {
201
+ // To be depreciated in further versions
202
+ if (this.options.positionLeft === true) {
203
+ divElement.className += " toastify-left";
204
+ console.warn('Property `positionLeft` will be depreciated in further versions. Please use `position` instead.')
205
+ } else {
206
+ // Default position
207
+ divElement.className += " toastify-right";
208
+ }
209
+ }
210
+
211
+ // Assigning gravity of element
212
+ divElement.className += " " + this.options.gravity;
213
+
214
+ if (this.options.backgroundColor) {
215
+ // This is being deprecated in favor of using the style HTML DOM property
216
+ console.warn('DEPRECATION NOTICE: "backgroundColor" is being deprecated. Please use the "style.background" property.');
217
+ }
218
+
219
+ // Loop through our style object and apply styles to divElement
220
+ for (var property in this.options.style) {
221
+ divElement.style[property] = this.options.style[property];
222
+ }
223
+
224
+ // Announce the toast to screen readers
225
+ if (this.options.ariaLive) {
226
+ divElement.setAttribute('aria-live', this.options.ariaLive)
227
+ }
228
+
229
+ // Adding the toast message/node
230
+ if (this.options.node && this.options.node.nodeType === Node.ELEMENT_NODE) {
231
+ // If we have a valid node, we insert it
232
+ divElement.appendChild(this.options.node)
233
+ } else {
234
+ if (this.options.escapeMarkup) {
235
+ divElement.innerText = this.options.text;
236
+ } else {
237
+ divElement.innerHTML = this.options.text;
238
+ }
239
+
240
+ if (this.options.avatar !== "") {
241
+ var avatarElement = document.createElement("img");
242
+ avatarElement.src = this.options.avatar;
243
+
244
+ avatarElement.className = "toastify-avatar";
245
+
246
+ if (this.options.position == "left" || this.options.positionLeft === true) {
247
+ // Adding close icon on the left of content
248
+ divElement.appendChild(avatarElement);
249
+ } else {
250
+ // Adding close icon on the right of content
251
+ divElement.insertAdjacentElement("afterbegin", avatarElement);
252
+ }
253
+ }
254
+ }
255
+
256
+ // Adding a close icon to the toast
257
+ if (this.options.close === true) {
258
+ // Create a span for close element
259
+ var closeElement = document.createElement("button");
260
+ closeElement.type = "button";
261
+ closeElement.setAttribute("aria-label", "Close");
262
+ closeElement.className = "toast-close";
263
+ closeElement.innerHTML = "&#10006;";
264
+
265
+ // Triggering the removal of toast from DOM on close click
266
+ closeElement.addEventListener(
267
+ "click",
268
+ function(event) {
269
+ event.stopPropagation();
270
+ this.removeElement(this.toastElement);
271
+ window.clearTimeout(this.toastElement.timeOutValue);
272
+ }.bind(this)
273
+ );
274
+
275
+ //Calculating screen width
276
+ var width = window.innerWidth > 0 ? window.innerWidth : screen.width;
277
+
278
+ // Adding the close icon to the toast element
279
+ // Display on the right if screen width is less than or equal to 360px
280
+ if ((this.options.position == "left" || this.options.positionLeft === true) && width > 360) {
281
+ // Adding close icon on the left of content
282
+ divElement.insertAdjacentElement("afterbegin", closeElement);
283
+ } else {
284
+ // Adding close icon on the right of content
285
+ divElement.appendChild(closeElement);
286
+ }
287
+ }
288
+
289
+ // Clear timeout while toast is focused
290
+ if (this.options.stopOnFocus && this.options.duration > 0) {
291
+ var self = this;
292
+ // stop countdown
293
+ divElement.addEventListener(
294
+ "mouseover",
295
+ function(event) {
296
+ window.clearTimeout(divElement.timeOutValue);
297
+ }
298
+ )
299
+ // add back the timeout
300
+ divElement.addEventListener(
301
+ "mouseleave",
302
+ function() {
303
+ divElement.timeOutValue = window.setTimeout(
304
+ function() {
305
+ // Remove the toast from DOM
306
+ self.removeElement(divElement);
307
+ },
308
+ self.options.duration
309
+ )
310
+ }
311
+ )
312
+ }
313
+
314
+ // Adding an on-click destination path
315
+ if (typeof this.options.destination !== "undefined") {
316
+ divElement.addEventListener(
317
+ "click",
318
+ function(event) {
319
+ event.stopPropagation();
320
+ if (this.options.newWindow === true) {
321
+ window.open(this.options.destination, "_blank");
322
+ } else {
323
+ window.location = this.options.destination;
324
+ }
325
+ }.bind(this)
326
+ );
327
+ }
328
+
329
+ if (typeof this.options.onClick === "function" && typeof this.options.destination === "undefined") {
330
+ divElement.addEventListener(
331
+ "click",
332
+ function(event) {
333
+ event.stopPropagation();
334
+ this.options.onClick();
335
+ }.bind(this)
336
+ );
337
+ }
338
+
339
+ // Adding offset
340
+ if(typeof this.options.offset === "object") {
341
+
342
+ var x = getAxisOffsetAValue("x", this.options);
343
+ var y = getAxisOffsetAValue("y", this.options);
344
+
345
+ var xOffset = this.options.position == "left" ? x : "-" + x;
346
+ var yOffset = this.options.gravity == "toastify-top" ? y : "-" + y;
347
+
348
+ divElement.style.transform = "translate(" + xOffset + "," + yOffset + ")";
349
+
350
+ }
351
+
352
+ // Returning the generated element
353
+ return divElement;
354
+ },
355
+
356
+ // Displaying the toast
357
+ showToast: function() {
358
+ // Creating the DOM object for the toast
359
+ this.toastElement = this.buildToast();
360
+
361
+ // Getting the root element to with the toast needs to be added
362
+ var rootElement;
363
+ if (typeof this.options.selector === "string") {
364
+ rootElement = document.getElementById(this.options.selector);
365
+ } else if (this.options.selector instanceof HTMLElement || (typeof ShadowRoot !== 'undefined' && this.options.selector instanceof ShadowRoot)) {
366
+ rootElement = this.options.selector;
367
+ } else {
368
+ rootElement = document.body;
369
+ }
370
+
371
+ // Validating if root element is present in DOM
372
+ if (!rootElement) {
373
+ throw "Root element is not defined";
374
+ }
375
+
376
+ // Adding the DOM element
377
+ var elementToInsert = Toastify.defaults.oldestFirst ? rootElement.firstChild : rootElement.lastChild;
378
+ rootElement.insertBefore(this.toastElement, elementToInsert);
379
+
380
+ // Repositioning the toasts in case multiple toasts are present
381
+ Toastify.reposition();
382
+
383
+ if (this.options.duration > 0) {
384
+ this.toastElement.timeOutValue = window.setTimeout(
385
+ function() {
386
+ // Remove the toast from DOM
387
+ this.removeElement(this.toastElement);
388
+ }.bind(this),
389
+ this.options.duration
390
+ ); // Binding `this` for function invocation
391
+ }
392
+
393
+ // Supporting function chaining
394
+ return this;
395
+ },
396
+
397
+ hideToast: function() {
398
+ if (this.toastElement.timeOutValue) {
399
+ clearTimeout(this.toastElement.timeOutValue);
400
+ }
401
+ this.removeElement(this.toastElement);
402
+ },
403
+
404
+ // Removing the element from the DOM
405
+ removeElement: function(toastElement) {
406
+ // Hiding the element
407
+ // toastElement.classList.remove("on");
408
+ toastElement.className = toastElement.className.replace(" on", "");
409
+
410
+ // Removing the element from DOM after transition end
411
+ window.setTimeout(
412
+ function() {
413
+ // remove options node if any
414
+ if (this.options.node && this.options.node.parentNode) {
415
+ this.options.node.parentNode.removeChild(this.options.node);
416
+ }
417
+
418
+ // Remove the element from the DOM, only when the parent node was not removed before.
419
+ if (toastElement.parentNode) {
420
+ toastElement.parentNode.removeChild(toastElement);
421
+ }
422
+
423
+ // Calling the callback function
424
+ this.options.callback.call(toastElement);
425
+
426
+ // Repositioning the toasts again
427
+ Toastify.reposition();
428
+ }.bind(this),
429
+ 400
430
+ ); // Binding `this` for function invocation
431
+ },
432
+ };
433
+
434
+ // Positioning the toasts on the DOM
435
+ Toastify.reposition = function() {
436
+
437
+ // Top margins with gravity
438
+ var topLeftOffsetSize = {
439
+ top: 15,
440
+ bottom: 15,
441
+ };
442
+ var topRightOffsetSize = {
443
+ top: 15,
444
+ bottom: 15,
445
+ };
446
+ var offsetSize = {
447
+ top: 15,
448
+ bottom: 15,
449
+ };
450
+
451
+ // Get all toast messages on the DOM
452
+ var allToasts = document.getElementsByClassName("toastify");
453
+
454
+ var classUsed;
455
+
456
+ // Modifying the position of each toast element
457
+ for (var i = 0; i < allToasts.length; i++) {
458
+ // Getting the applied gravity
459
+ if (containsClass(allToasts[i], "toastify-top") === true) {
460
+ classUsed = "toastify-top";
461
+ } else {
462
+ classUsed = "toastify-bottom";
463
+ }
464
+
465
+ var height = allToasts[i].offsetHeight;
466
+ classUsed = classUsed.substr(9, classUsed.length-1)
467
+ // Spacing between toasts
468
+ var offset = 15;
469
+
470
+ var width = window.innerWidth > 0 ? window.innerWidth : screen.width;
471
+
472
+ // Show toast in center if screen with less than or equal to 360px
473
+ if (width <= 360) {
474
+ // Setting the position
475
+ allToasts[i].style[classUsed] = offsetSize[classUsed] + "px";
476
+
477
+ offsetSize[classUsed] += height + offset;
478
+ } else {
479
+ if (containsClass(allToasts[i], "toastify-left") === true) {
480
+ // Setting the position
481
+ allToasts[i].style[classUsed] = topLeftOffsetSize[classUsed] + "px";
482
+
483
+ topLeftOffsetSize[classUsed] += height + offset;
484
+ } else {
485
+ // Setting the position
486
+ allToasts[i].style[classUsed] = topRightOffsetSize[classUsed] + "px";
487
+
488
+ topRightOffsetSize[classUsed] += height + offset;
489
+ }
490
+ }
491
+ }
492
+
493
+ // Supporting function chaining
494
+ return this;
495
+ };
496
+
497
+ // Helper function to get offset.
498
+ function getAxisOffsetAValue(axis, options) {
499
+
500
+ if(options.offset[axis]) {
501
+ if(isNaN(options.offset[axis])) {
502
+ return options.offset[axis];
503
+ }
504
+ else {
505
+ return options.offset[axis] + 'px';
506
+ }
507
+ }
508
+
509
+ return '0px';
510
+
511
+ }
512
+
513
+ function containsClass(elem, yourClass) {
514
+ if (!elem || typeof yourClass !== "string") {
515
+ return false;
516
+ } else if (
517
+ elem.className &&
518
+ elem.className
519
+ .trim()
520
+ .split(/\s+/gi)
521
+ .indexOf(yourClass) > -1
522
+ ) {
523
+ return true;
524
+ } else {
525
+ return false;
526
+ }
527
+ }
528
+
529
+ // Setting up the prototype for the init object
530
+ Toastify.lib.init.prototype = Toastify.lib;
531
+
532
+ // Returning the Toastify function to be assigned to the window object/module
533
+ return Toastify;
534
+ });
535
+ </script>
tab_chat.py ADDED
@@ -0,0 +1,147 @@
1
+ import gradio as gr
2
+ import uuid
3
+ from datetime import datetime
4
+ import pandas as pd
5
+ from model_handler import ModelHandler
6
+ from config import CHAT_MODEL_SPECS, LING_1T
7
+ from recommand_config import RECOMMENDED_INPUTS
8
+ from ui_components.model_selector import create_model_selector
9
+
10
+ def create_chat_tab():
11
+ model_handler = ModelHandler()
12
+
13
+ conversation_store = gr.BrowserState(default_value=[], storage_key="ling_conversation_history")
14
+ current_conversation_id = gr.BrowserState(default_value=None, storage_key="ling_current_conversation_id")
15
+
16
+ def get_history_df(history):
17
+ if not history:
18
+ return pd.DataFrame({'ID': [], 'Conversation': []})
19
+ df = pd.DataFrame(history)
20
+ return df[['id', 'title']].rename(columns={'id': 'ID', 'title': 'Conversation'})
21
+
22
+ def handle_new_chat(history):
23
+ conv_id = str(uuid.uuid4())
24
+ new_convo = {
25
+ "id": conv_id, "title": "New Conversation",
26
+ "messages": [], "timestamp": datetime.now().isoformat()
27
+ }
28
+ updated_history = [new_convo] + (history or [])
29
+ return (
30
+ conv_id,
31
+ updated_history,
32
+ [],
33
+ gr.update(value=get_history_df(updated_history))
34
+ )
35
+
36
+ def load_conversation_from_df(df: pd.DataFrame, evt: gr.SelectData, history):
37
+ if evt.index is None:
38
+ return None, []
39
+ selected_id = df.iloc[evt.index[0]]['ID']
40
+ for convo in history:
41
+ if convo["id"] == selected_id:
42
+ return selected_id, convo["messages"]
43
+ # Fallback to a new chat if something goes wrong (call handle_new_chat only once)
44
+ new_id, _, new_messages, _ = handle_new_chat(history)
+ return new_id, new_messages
45
+
46
+ with gr.Row(equal_height=False, elem_id="indicator-chat-tab"):
47
+ with gr.Column(scale=1):
48
+ new_chat_btn = gr.Button("➕ 新对话")
49
+ history_df = gr.DataFrame(
50
+ value=get_history_df(conversation_store.value),
51
+ headers=["ID", "对话记录"],
52
+ datatype=["str", "str"],
53
+ interactive=False,
54
+ visible=True,
55
+ column_widths=["0%", "99%"]
56
+ )
57
+
58
+ with gr.Column(scale=4):
59
+ chatbot = gr.Chatbot(height=500, type='messages')
60
+ with gr.Row():
61
+ textbox = gr.Textbox(placeholder="输入消息...", container=False, scale=7)
62
+ submit_btn = gr.Button("发送", scale=1)
63
+
64
+ gr.Markdown("### 推荐对话")
65
+ recommended_dataset = gr.Dataset(
66
+ components=[gr.Textbox(visible=False)],
67
+ samples=[[item["task"]] for item in RECOMMENDED_INPUTS],
68
+ label="推荐场景", headers=["选择一个场景试试"],
69
+ )
70
+
71
+ with gr.Column(scale=1):
72
+ model_dropdown, model_description_markdown = create_model_selector(
73
+ model_specs=CHAT_MODEL_SPECS,
74
+ default_model_constant=LING_1T
75
+ )
76
+
77
+ system_prompt_textbox = gr.Textbox(label="系统提示词", lines=5, placeholder="输入系统提示词...")
78
+ temperature_slider = gr.Slider(minimum=0, maximum=1.0, value=0.7, step=0.1, label="温度参数")
79
+
80
+ # --- Event Handlers --- #
81
+ # The change handler is now encapsulated within create_model_selector
82
+ def on_select_recommendation(evt: gr.SelectData, history):
83
+ selected_task = evt.value[0]
84
+ item = next((i for i in RECOMMENDED_INPUTS if i["task"] == selected_task), None)
85
+ if not item:
+ # One no-op update per output of recommended_dataset.select (8 outputs)
+ return (gr.update(),) * 8
86
+
87
+ new_id, new_history, new_messages, history_df_update = handle_new_chat(history)
88
+
89
+ return (
90
+ new_id, new_history,
91
+ gr.update(value=item["model"]),
92
+ gr.update(value=item["system_prompt"]),
93
+ gr.update(value=item["temperature"]),
94
+ gr.update(value=item["user_message"]),
95
+ history_df_update,
96
+ new_messages
97
+ )
98
+
99
+ recommended_dataset.select(on_select_recommendation, inputs=[conversation_store], outputs=[current_conversation_id, conversation_store, model_dropdown, system_prompt_textbox, temperature_slider, textbox, history_df, chatbot], show_progress="none")
100
+
101
+ def chat_stream(conv_id, history, model_display_name, message, chat_history, system_prompt, temperature):
102
+ if not message:
103
+ yield chat_history
104
+ return
105
+ model_constant = next((k for k, v in CHAT_MODEL_SPECS.items() if v["display_name"] == model_display_name), LING_1T)
106
+ response_generator = model_handler.get_response(model_constant, message, chat_history, system_prompt, temperature)
107
+ for history_update in response_generator:
108
+ yield history_update
109
+
110
+ def on_chat_stream_complete(conv_id, history, final_chat_history):
111
+ current_convo = next((c for c in history if c["id"] == conv_id), None)
112
+ if not current_convo:
113
+ return history, gr.update()
114
+
115
+ if len(final_chat_history) > len(current_convo["messages"]) and current_convo["title"] == "New Conversation":
116
+ user_message = final_chat_history[-2]["content"] if len(final_chat_history) > 1 else final_chat_history[0]["content"]
117
+ current_convo["title"] = user_message[:50]
118
+
119
+ current_convo["messages"] = final_chat_history
120
+ current_convo["timestamp"] = datetime.now().isoformat()
121
+
122
+ history = sorted([c for c in history if c["id"] != conv_id] + [current_convo], key=lambda x: x["timestamp"], reverse=True)
123
+ return history, gr.update(value=get_history_df(history))
124
+
125
+ submit_btn.click(
126
+ chat_stream,
127
+ [current_conversation_id, conversation_store, model_dropdown, textbox, chatbot, system_prompt_textbox, temperature_slider],
128
+ [chatbot]
129
+ ).then(
130
+ on_chat_stream_complete,
131
+ [current_conversation_id, conversation_store, chatbot],
132
+ [conversation_store, history_df]
133
+ )
134
+ textbox.submit(
135
+ chat_stream,
136
+ [current_conversation_id, conversation_store, model_dropdown, textbox, chatbot, system_prompt_textbox, temperature_slider],
137
+ [chatbot]
138
+ ).then(
139
+ on_chat_stream_complete,
140
+ [current_conversation_id, conversation_store, chatbot],
141
+ [conversation_store, history_df]
142
+ )
143
+
144
+ new_chat_btn.click(handle_new_chat, inputs=[conversation_store], outputs=[current_conversation_id, conversation_store, chatbot, history_df])
145
+ history_df.select(load_conversation_from_df, inputs=[history_df, conversation_store], outputs=[current_conversation_id, chatbot])
146
+
147
+ return conversation_store, current_conversation_id, history_df, chatbot
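
For context, here is one way the chat tab could be mounted from the top-level app. This is a minimal sketch, not code from this commit: `app.py` is referenced in the Space metadata, but the tab label and layout below are assumptions.

```python
# Hypothetical app.py wiring (sketch only; the actual app.py is not part of this diff).
import gradio as gr

from tab_chat import create_chat_tab

with gr.Blocks(title="Ling Space") as demo:
    with gr.Tab("对话"):
        # The returned components could be shared with other tabs later if needed.
        conversation_store, current_conversation_id, history_df, chatbot = create_chat_tab()

if __name__ == "__main__":
    demo.launch()
```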
tab_code.py ADDED
@@ -0,0 +1,241 @@
1
+ import gradio as gr
2
+ import time
3
+ import logging
4
+ from model_handler import ModelHandler
5
+ from config import CHAT_MODEL_SPECS, LING_1T, CODE_FRAMEWORK_SPECS, STATIC_PAGE
6
+ from ui_components.model_selector import create_model_selector
7
+ from ui_components.code_framework_selector import create_code_framework_selector
8
+ from tab_code_prompts.html_system_prompt import get_html_system_prompt
9
+
10
+ # Configure logging
11
+ logger = logging.getLogger(__name__)
12
+
13
+ # Read the content of the JavaScript file for error catching
14
+ try:
15
+ with open("static/catch-error.js", "r", encoding="utf-8") as f:
16
+ CATCH_ERROR_JS_SCRIPT = f.read()
17
+ except FileNotFoundError:
18
+ logger.error("Error: static/catch-error.js not found. The error catching overlay will not work.")
19
+ CATCH_ERROR_JS_SCRIPT = ""
20
+
21
+
22
+ def get_spinner_html():
23
+ """Return HTML with a CSS spinner animation"""
24
+ return """
25
+ <div style="width: 100%; height: 600px; display: flex; justify-content: center; align-items: center; border: 1px solid #ddd; background-color: #f9f9f9;">
26
+ <div class="spinner"></div>
27
+ </div>
28
+ <style>
29
+ .spinner {
30
+ border: 4px solid rgba(0, 0, 0, 0.1);
31
+ width: 36px;
32
+ height: 36px;
33
+ border-radius: 50%;
34
+ border-left-color: #09f;
35
+ animation: spin 1s ease infinite;
36
+ }
37
+ @keyframes spin {
38
+ 0% { transform: rotate(0deg); }
39
+ 100% { transform: rotate(360deg); }
40
+ }
41
+ </style>
42
+ """
43
+
44
+ def generate_code(code_type, model_choice, user_prompt, chatbot_history):
45
+ """Generate code and provide a preview, updating a log stream chatbot."""
46
+ logger.info(f"--- [Code Generation] Start ---")
47
+ logger.info(f"Code Type: {code_type}, Model: {model_choice}, Prompt: '{user_prompt}'")
48
+
49
+ if not user_prompt:
50
+ chatbot_history.append({"role": "assistant", "content": "🚨 **错误**: 请输入提示词。"})
51
+ yield "", gr.update(value="<p>预览将在此处显示。</p>"), chatbot_history, gr.update()
52
+ return
53
+
54
+ chatbot_history.append({"role": "assistant", "content": "⏳ 开始生成代码..."})
55
+ yield "", gr.HTML(get_spinner_html()), chatbot_history, gr.update()
56
+
57
+ if user_prompt == "create an error":
58
+ error_code = f"""<h1>This will create an error</h1><script>{CATCH_ERROR_JS_SCRIPT}</script><script>nonExistentFunction();</script>"""
59
+ escaped_code = error_code.replace("&", "&amp;").replace("'", "&apos;").replace('"', '&quot;')  # escape & first so existing entities survive the srcdoc round-trip
60
+ final_preview_html = f"""
61
+ <div style="width: 100%; height: 600px; border: 1px solid #ddd; overflow: hidden; position: relative; background-color: #f9f9f9;">
62
+ <iframe srcdoc='{escaped_code}'
63
+ style="position: absolute; top: 0; left: 0; width: 200%; height: 200%; transform: scale(0.5); transform-origin: 0 0; border: none;">
64
+ </iframe>
65
+ </div>
66
+ """
67
+ chatbot_history.append({"role": "assistant", "content": "✅ **成功**: 已生成一个用于测试的错误页面。"})
68
+ yield error_code, gr.update(value=final_preview_html), chatbot_history, gr.Tabs(selected=0)
69
+ return
70
+
71
+ start_time = time.time()
72
+ model_handler = ModelHandler()
73
+
74
+ if code_type == "静态页面":
75
+ system_prompt = get_html_system_prompt()
76
+ full_code_with_think = ""
77
+ full_code_for_preview = ""
78
+ buffer = ""
79
+ is_thinking = False
80
+
81
+ for code_chunk in model_handler.generate_code(system_prompt, user_prompt, code_type, model_choice):
82
+ full_code_with_think += code_chunk
83
+ buffer += code_chunk
84
+
85
+ while True:
86
+ if is_thinking:
87
+ end_index = buffer.find("</think>")
88
+ if end_index != -1:
89
+ is_thinking = False
90
+ buffer = buffer[end_index + len("</think>"):]
91
+ else:
92
+ break
93
+ else:
94
+ start_index = buffer.find("<think>")
95
+ if start_index != -1:
96
+ part_to_add = buffer[:start_index]
97
+ full_code_for_preview += part_to_add
98
+ is_thinking = True
99
+ buffer = buffer[start_index:]
100
+ else:
101
+ full_code_for_preview += buffer
102
+ buffer = ""
103
+ break
104
+
105
+ elapsed_time = time.time() - start_time
106
+ generated_length = len(full_code_with_think)
107
+ speed = generated_length / elapsed_time if elapsed_time > 0 else 0
108
+
109
+ log_message = f"""
110
+ **⏳ 正在生成中...**
111
+ - **时间:** {elapsed_time:.2f}s
112
+ - **长度:** {generated_length} chars
113
+ - **速度:** {speed:.2f} char/s
114
+ """
115
+
116
+ if len(chatbot_history) > 0 and "正在生成中" in chatbot_history[-1]["content"]:
117
+ chatbot_history[-1] = {"role": "assistant", "content": log_message}
118
+ else:
119
+ chatbot_history.append({"role": "assistant", "content": log_message})
120
+
121
+ yield full_code_with_think, gr.update(), chatbot_history, gr.update()
122
+
123
+ escaped_code = full_code_for_preview.replace("&", "&amp;").replace("'", "&apos;").replace('"', '&quot;')
124
+ final_preview_html = f"""
125
+ <div style="width: 100%; height: 600px; border: 1px solid #ddd; overflow: hidden; position: relative; background-color: #f9f9f9;">
126
+ <iframe srcdoc='{escaped_code}'
127
+ style="position: absolute; top: 0; left: 0; width: 200%; height: 200%; transform: scale(0.5); transform-origin: 0 0; border: none;">
128
+ </iframe>
129
+ </div>
130
+ """
131
+ chatbot_history.append({"role": "assistant", "content": "✅ **成功**: 代码生成完成!"})
132
+ yield full_code_with_think, gr.HTML(final_preview_html), chatbot_history, gr.Tabs(selected=0)
133
+ logger.info("Static page streaming finished.")
134
+
135
+ def refresh_preview(code_type, current_code, chatbot_history):
136
+ """Refresh the preview and add a log entry."""
137
+ logger.info(f"--- [Manual Refresh] Start ---")
138
+ logger.info(f"Code Type: {code_type}")
139
+
140
+ if code_type == "静态页面":
141
+ escaped_code = current_code.replace("&", "&amp;").replace("'", "&apos;").replace('"', '&quot;')
142
+ final_preview_html = f"""
143
+ <div style="width: 100%; height: 600px; border: 1px solid #ddd; overflow: hidden; position: relative; background-color: #f9f9f9;">
144
+ <iframe srcdoc='{escaped_code}'
145
+ style="position: absolute; top: 0; left: 0; width: 200%; height: 200%; transform: scale(0.5); transform-origin: 0 0; border: none;">
146
+ </iframe>
147
+ </div>
148
+ """
149
+ chatbot_history.append({"role": "assistant", "content": "🔄 **状态**: 预览已手动刷新。"})
150
+ logger.info("Refreshed static page preview.")
151
+ return gr.HTML(final_preview_html), chatbot_history
152
+
153
+ chatbot_history.append({"role": "assistant", "content": "⚠️ **警告**: 未知的代码类型,无法刷新。"})
154
+ return gr.update(), chatbot_history
155
+
156
+ def toggle_fullscreen(is_fullscreen):
157
+ is_fullscreen = not is_fullscreen
158
+ new_button_text = "退出全屏" if is_fullscreen else "全屏预览"
159
+ panel_visibility = not is_fullscreen
160
+ return is_fullscreen, gr.update(value=new_button_text), gr.update(visible=panel_visibility)
161
+
162
+ def log_js_error(error_text, chatbot_history):
163
+ """Appends a JavaScript error received from the frontend to the log chatbot."""
164
+ if not error_text:
165
+ return chatbot_history
166
+
167
+ formatted_error = f"🚨 **在预览中发现运行时异常!**\n```\n{error_text}\n```"
168
+
169
+ # Check if the last message is the same error to prevent flooding
170
+ if chatbot_history and chatbot_history[-1]["content"] == formatted_error:
171
+ return chatbot_history
172
+
173
+ chatbot_history.append({"role": "assistant", "content": formatted_error})
174
+ return chatbot_history
175
+
176
+ def create_code_tab():
177
+ fullscreen_state = gr.State(False)
178
+
179
+ html_examples = [
180
+ "Write a hello world alert",
181
+ "Create a Canvas animation of continuous colorful fireworks blooming on a black background.",
182
+ "Generate a Canvas special effect with iridescent light streams.",
183
+ "create an error"
184
+ ]
185
+
186
+ with gr.Row(elem_id="indicator-code-tab"):
187
+ with gr.Column(scale=1) as left_panel:
188
+ with gr.Column(scale=1): # Settings Panel
189
+ code_framework_dropdown = create_code_framework_selector(
190
+ framework_specs=CODE_FRAMEWORK_SPECS,
191
+ default_framework_constant=STATIC_PAGE
192
+ )
193
+ model_choice_dropdown, model_description_markdown = create_model_selector(
194
+ model_specs=CHAT_MODEL_SPECS,
195
+ default_model_constant=LING_1T
196
+ )
197
+ prompt_input = gr.Textbox(lines=5, placeholder="例如:创建一个带标题和按钮的简单页面", label="提示词")
198
+ with gr.Column():
199
+ gr.Examples(examples=html_examples, inputs=prompt_input, label="✨ 试试这些酷炫的例子吧")
200
+ generate_button = gr.Button("生成代码", variant="primary")
201
+
202
+ with gr.Column(scale=4):
203
+ with gr.Tabs(elem_id="result_tabs") as result_tabs:
204
+ with gr.TabItem("实时预览", id=0):
205
+ with gr.Row():
206
+ gr.Markdown("### 实时预览")
207
+ fullscreen_button = gr.Button("全屏预览", scale=0)
208
+ preview_output = gr.HTML(value="<p>预览将在此处显示。</p>")
209
+ with gr.TabItem("生成的源代码", id=1):
210
+ gr.Markdown("### 生成的源代码")
211
+ code_output = gr.Code(language="html", label="生成的代码", interactive=True)
212
+ refresh_button = gr.Button("刷新预览")
213
+
214
+ with gr.Column(scale=1):
215
+ log_chatbot = gr.Chatbot(label="生成日志", height=300, type="messages")
216
+
217
+ js_error_channel = gr.Textbox(visible=True, elem_classes=["js_error_channel"], label="Debug Error Channel", interactive=False)
218
+
219
+ refresh_button.click(
220
+ fn=refresh_preview,
221
+ inputs=[code_framework_dropdown, code_output, log_chatbot],
222
+ outputs=[preview_output, log_chatbot]
223
+ )
224
+
225
+ generate_button.click(
226
+ fn=generate_code,
227
+ inputs=[code_framework_dropdown, model_choice_dropdown, prompt_input, log_chatbot],
228
+ outputs=[code_output, preview_output, log_chatbot, result_tabs]
229
+ )
230
+
231
+ fullscreen_button.click(
232
+ fn=toggle_fullscreen,
233
+ inputs=[fullscreen_state],
234
+ outputs=[fullscreen_state, fullscreen_button, left_panel]
235
+ )
236
+
237
+ js_error_channel.change(
238
+ fn=log_js_error,
239
+ inputs=[js_error_channel, log_chatbot],
240
+ outputs=log_chatbot
241
+ )
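
The streaming loop in `generate_code` strips `<think>...</think>` segments on the fly so reasoning traces never reach the preview. Below is the same logic pulled out into a standalone helper, mainly to make it easy to unit-test; the helper name and the example chunks are ours, not part of the diff.

```python
# Standalone sketch of the <think>-tag filter used in generate_code above.
def strip_think_segments(chunks):
    """Yield the progressively visible text, dropping <think>...</think> spans."""
    visible = ""
    buffer = ""
    is_thinking = False
    for chunk in chunks:
        buffer += chunk
        while True:
            if is_thinking:
                end = buffer.find("</think>")
                if end == -1:
                    break  # closing tag not streamed yet
                is_thinking = False
                buffer = buffer[end + len("</think>"):]
            else:
                start = buffer.find("<think>")
                if start == -1:
                    visible += buffer
                    buffer = ""
                    break
                visible += buffer[:start]
                is_thinking = True
                buffer = buffer[start:]
        yield visible


if __name__ == "__main__":
    parts = ["<html>", "<think>plan the la", "yout</think>", "<body>ok</body>"]
    assert list(strip_think_segments(parts))[-1] == "<html><body>ok</body>"
```

Note that, like the original loop, this can leak a partially streamed `<think` prefix if the opening tag itself is split across two chunks; that edge case may be acceptable here, but it is worth knowing about.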
tab_code_prompts/__init__.py ADDED
File without changes
tab_code_prompts/gradio_system_prompt.py ADDED
@@ -0,0 +1,9 @@
1
+ def get_gradio_system_prompt():
2
+ """Returns the system prompt for generating Gradio applications."""
3
+ return """
4
+ You are an expert Gradio developer. Create a complete, runnable, single-file Gradio application based on the user's request.
5
+ The code must be self-contained in a single Python script.
6
+ IMPORTANT: The main Gradio app instance MUST be assigned to a variable named `demo`.
7
+ The script must end with the app launch command: `demo.launch()`.
8
+ Do not include any explanations, just the raw Python code inside a ```python block.
9
+ """
tab_code_prompts/html_system_prompt.py ADDED
@@ -0,0 +1,7 @@
1
+ def get_html_system_prompt():
2
+ """Returns the system prompt for generating static HTML pages."""
3
+ return """
4
+ You are an expert front-end developer. Create a complete, modern, and responsive single HTML file based on the user's request.
5
+ The file must be self-contained, including all necessary HTML, CSS, and JavaScript.
6
+ Do not include any explanations, just the raw HTML code.
7
+ """
tab_smart_writer.py ADDED
@@ -0,0 +1,262 @@
1
+ import gradio as gr
2
+ import time
3
+ import random
4
+
5
+ # --- Mock Data ---
6
+
7
+ MOCK_STYLE = """风格:赛博朋克 / 黑色电影
8
+ 视角:第三人称限制视角(主角:凯)
9
+ 基调:阴郁、压抑、霓虹闪烁的高科技低生活
10
+ 核心规则:
11
+ 1. 强调感官描写,特别是光影和声音。
12
+ 2. 避免过多的心理独白,通过行动展现心理。
13
+ """
14
+
15
+ MOCK_KNOWLEDGE_BASE = [
16
+ ["凯 (Kai)", "主角,前黑客,现在是义体医生。左臂是老式的军用义体。"],
17
+ ["夜之城 (Night City)", "故事发生的舞台,一座永夜的巨型都市,被企业掌控。"],
18
+ ["荒坂塔 (Arasaka Tower)", "市中心的最高建筑,象征着绝对的权力。"],
19
+ ["赛博精神病 (Cyberpsychosis)", "过度改装义体导致的解离性精神障碍。"],
20
+ ["网络监察 (NetWatch)", "负责维护网络安全的组织,被黑客们视为走狗。"]
21
+ ]
22
+
23
+ MOCK_SHORT_TERM_OUTLINE = [
24
+ [True, "凯接到一个神秘电话,对方声称知道他失踪妹妹的下落。"],
25
+ [False, "凯前往'来生'酒吧与接头人见面。"],
26
+ [False, "在酒吧遇到旧识,引发一场关于过去的争执。"],
27
+ [False, "接头人出现,但似乎被跟踪了。"]
28
+ ]
29
+
30
+ MOCK_LONG_TERM_OUTLINE = [
31
+ [False, "揭露夜之城背后的惊天阴谋。"],
32
+ [False, "凯找回妹妹,或者接受她已经改变的事实。"],
33
+ [False, "与荒坂公司的最终决战。"]
34
+ ]
35
+
36
+ MOCK_INSPIRATIONS = [
37
+ "霓虹灯光在雨后的路面上破碎成无数光斑,凯拉紧了风衣的领口,义体手臂在寒风中隐隐作痛。来生酒吧的招牌在雾气中若隐若现,像是一只在黑暗中窥视的电子眼。",
38
+ "\"你来晚了。\"接头人的声音经过变声器处理,听起来像是指甲划过玻璃。他坐在阴影里,只有指尖的一点红光在闪烁——那是他正在抽的廉价合成烟。",
39
+ "突如其来的爆炸声震碎了酒吧的玻璃,人群尖叫着四散奔逃。凯本能地拔出了腰间的动能手枪,他的视觉系统瞬间切换到了战斗模式,周围的一切都变成了数据流。"
40
+ ]
41
+
42
+ MOCK_FLOW_SUGGESTIONS = [
43
+ "他感觉到了...",
44
+ "空气中弥漫着...",
45
+ "那是他从未见过的...",
46
+ "就在这一瞬间..."
47
+ ]
48
+
49
+ # --- Logic Functions ---
50
+
51
+ def get_stats(text):
52
+ """Mock word count and read time."""
53
+ if not text:
54
+ return "0 Words | 0 mins"
55
+ words = len(text)  # character count as a rough word proxy for CJK text
56
+ read_time = max(1, words // 500)
57
+ return f"{words} Words | ~{read_time} mins"
58
+
59
+ def fetch_inspiration(prompt):
60
+ """Simulate fetching inspiration options based on user prompt."""
61
+ time.sleep(1)
62
+
63
+ # Simple Mock Logic based on prompt keywords
64
+ if prompt and "打斗" in prompt:
65
+ opts = [
66
+ "凯侧身闪过那一记重拳,义体关节发出尖锐的摩擦声。他顺势抓住对方的手腕,电流顺着接触点瞬间爆发。",
67
+ "激光刃切开空气,留下一道灼热的残影。凯没有退缩,他的视觉系统已经计算出了对方唯一的破绽。",
68
+ "周围的空气仿佛凝固了,只剩下心跳声和能量枪充能的嗡嗡声。谁先动,谁就会死。"
69
+ ]
70
+ elif prompt and "风景" in prompt:
71
+ opts = [
72
+ "酸雨冲刷着生锈的金属外墙,流下一道道黑色的泪痕。远处的全息广告牌在雨雾中显得格外刺眼。",
73
+ "清晨的阳光穿透厚重的雾霾,无力地洒在贫民窟的屋顶上。这里没有希望,只有生存。",
74
+ "夜之城的地下就像是一个巨大的迷宫,管道交错,蒸汽弥漫,老鼠和瘾君子在阴影中通过眼神交流。"
75
+ ]
76
+ else:
77
+ opts = MOCK_INSPIRATIONS
78
+
79
+ return gr.update(visible=True), opts[0], opts[1], opts[2]
80
+
81
+ def apply_inspiration(current_text, inspiration_text):
82
+ """Append selected inspiration to the editor."""
83
+ if not current_text:
84
+ new_text = inspiration_text
85
+ else:
86
+ new_text = current_text + "\n\n" + inspiration_text
87
+ return new_text, gr.update(visible=False), "" # Clear prompt
88
+
89
+ def dismiss_inspiration():
90
+ return gr.update(visible=False)
91
+
92
+ def fetch_flow_suggestion(current_text):
93
+ """Simulate fetching a short continuation."""
94
+ # If text ends with newline, maybe don't suggest? Or suggest new paragraph start.
95
+ time.sleep(0.5)
96
+ return random.choice(MOCK_FLOW_SUGGESTIONS)
97
+
98
+ def accept_flow_suggestion(current_text, suggestion):
99
+ if not suggestion or "等待输入" in suggestion:
100
+ return current_text
101
+ return current_text + suggestion
102
+
103
+ def refresh_context(current_outline):
104
+ """Mock refreshing the outline context (auto-complete task or add new one)."""
105
+ new_outline = [row[:] for row in current_outline]
106
+
107
+ # Try to complete the first pending task
108
+ task_completed = False
109
+ for row in new_outline:
110
+ if not row[0]:
111
+ row[0] = True
112
+ task_completed = True
113
+ break
114
+
115
+ # If all done, or randomly, add a new event
116
+ if not task_completed or random.random() > 0.7:
117
+ new_outline.append([False, f"新的动态事件: 突发情况 #{random.randint(100, 999)}"])
118
+
119
+ return new_outline
120
+
121
+ # --- UI Construction ---
122
+
123
+ def create_smart_writer_tab():
124
+ # Hidden Buttons for JS triggers
125
+ btn_accept_flow_trigger = gr.Button(visible=False, elem_id="btn_accept_flow_trigger")
126
+ btn_refresh_context_trigger = gr.Button(visible=False, elem_id="btn_refresh_context_trigger")
127
+
128
+ with gr.Row(equal_height=False, elem_id="indicator-writing-tab"):
129
+ # --- Left Column: Entity Console ---
130
+ with gr.Column(scale=0, min_width=384) as left_panel:
131
+ gr.Markdown("### 🧠 核心实体控制台")
132
+
133
+ with gr.Accordion("整体章程 (Style)", open=True):
134
+ style_input = gr.Textbox(
135
+ label="整体章程",
136
+ lines=8,
137
+ value=MOCK_STYLE,
138
+ interactive=True
139
+ )
140
+
141
+ with gr.Accordion("知识库 (Knowledge Base)", open=True):
142
+ kb_input = gr.Dataframe(
143
+ headers=["Term", "Description"],
144
+ datatype=["str", "str"],
145
+ value=MOCK_KNOWLEDGE_BASE,
146
+ interactive=True,
147
+ label="知识库",
148
+ wrap=True
149
+ )
150
+
151
+ with gr.Accordion("当前章节大纲 (Short-Term)", open=True):
152
+ short_outline_input = gr.Dataframe(
153
+ headers=["Done", "Task"],
154
+ datatype=["bool", "str"],
155
+ value=MOCK_SHORT_TERM_OUTLINE,
156
+ interactive=True,
157
+ label="当前章节大纲",
158
+ col_count=(2, "fixed"),
159
+ )
160
+
161
+ with gr.Accordion("故事总纲 (Long-Term)", open=False):
162
+ long_outline_input = gr.Dataframe(
163
+ headers=["Done", "Task"],
164
+ datatype=["bool", "str"],
165
+ value=MOCK_LONG_TERM_OUTLINE,
166
+ interactive=True,
167
+ label="故事总纲",
168
+ col_count=(2, "fixed"),
169
+ )
170
+
171
+ # --- Right Column: Writing Canvas ---
172
+ with gr.Column(scale=1) as right_panel:
173
+ # Toolbar
174
+ with gr.Row(elem_classes=["toolbar"]):
175
+ stats_display = gr.Markdown("0 Words | 0 mins")
176
+ inspiration_btn = gr.Button("✨ 灵感扩写 (Cmd+Enter)", size="sm", variant="primary")
177
+
178
+ # Main editor area
179
+ editor = gr.Textbox(
180
+ label="沉浸写作画布",
181
+ placeholder="开始你的创作...",
182
+ lines=30,
183
+ elem_classes=["writing-editor"],
184
+ elem_id="writing-editor",
185
+ show_label=False,
186
+ )
187
+
188
+ # Flow Suggestion
189
+ with gr.Row(variant="panel"):
190
+ flow_suggestion_display = gr.Textbox(
191
+ label="AI 实时续写建议 (按 Tab 采纳)",
192
+ value="(等待输入...)",
193
+ interactive=False,
194
+ scale=4,
195
+ elem_classes=["flow-suggestion-box"]
196
+ )
197
+ accept_flow_btn = gr.Button("采纳", scale=1, elem_id='btn-action-accept-flow')
198
+ refresh_flow_btn = gr.Button("换一个", scale=1)
199
+
200
+ # Inspiration Modal
201
+ with gr.Group(visible=False) as inspiration_modal:
202
+ gr.Markdown("### 💡 灵感选项 (由 Ling 模型生成)")
203
+
204
+ inspiration_prompt_input = gr.Textbox(
205
+ label="设定脉络 (可选)",
206
+ placeholder="例如:写一段激烈的打斗 / 描写赛博朋克夜景...",
207
+ lines=1
208
+ )
209
+ refresh_inspiration_btn = gr.Button("生成选项")
210
+
211
+ with gr.Row():
212
+ opt1_btn = gr.Button(MOCK_INSPIRATIONS[0], elem_classes=["inspiration-card"])
213
+ opt2_btn = gr.Button(MOCK_INSPIRATIONS[1], elem_classes=["inspiration-card"])
214
+ opt3_btn = gr.Button(MOCK_INSPIRATIONS[2], elem_classes=["inspiration-card"])
215
+ cancel_insp_btn = gr.Button("取消")
216
+
217
+ # --- Interactions ---
218
+
219
+ # 1. Stats
220
+ editor.change(fn=get_stats, inputs=editor, outputs=stats_display)
221
+
222
+ # 2. Inspiration Workflow
223
+ # Open Modal (reset prompt)
224
+ inspiration_btn.click(
225
+ fn=lambda: (gr.update(visible=True), ""),
226
+ outputs=[inspiration_modal, inspiration_prompt_input]
227
+ )
228
+
229
+ # Generate Options based on Prompt
230
+ refresh_inspiration_btn.click(
231
+ fn=fetch_inspiration,
232
+ inputs=[inspiration_prompt_input],
233
+ outputs=[inspiration_modal, opt1_btn, opt2_btn, opt3_btn]
234
+ )
235
+
236
+ # Apply Option
237
+ for btn in [opt1_btn, opt2_btn, opt3_btn]:
238
+ btn.click(
239
+ fn=apply_inspiration,
240
+ inputs=[editor, btn],
241
+ outputs=[editor, inspiration_modal, inspiration_prompt_input]
242
+ )
243
+
244
+ cancel_insp_btn.click(fn=dismiss_inspiration, outputs=inspiration_modal)
245
+
246
+ # 3. Flow Suggestion
247
+ editor.change(fn=fetch_flow_suggestion, inputs=editor, outputs=flow_suggestion_display)
248
+ refresh_flow_btn.click(fn=fetch_flow_suggestion, inputs=editor, outputs=flow_suggestion_display)
249
+
250
+ # Accept Flow (Triggered by Button or Tab Key via JS)
251
+ accept_flow_fn_inputs = [editor, flow_suggestion_display]
252
+ accept_flow_fn_outputs = [editor]
253
+
254
+ accept_flow_btn.click(fn=accept_flow_suggestion, inputs=accept_flow_fn_inputs, outputs=accept_flow_fn_outputs)
255
+ btn_accept_flow_trigger.click(fn=accept_flow_suggestion, inputs=accept_flow_fn_inputs, outputs=accept_flow_fn_outputs)
256
+
257
+ # 4. Context Refresh (Triggered by Enter Key via JS)
258
+ btn_refresh_context_trigger.click(
259
+ fn=refresh_context,
260
+ inputs=[short_outline_input],
261
+ outputs=[short_outline_input]
262
+ )
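
Since most of the logic above consists of pure mock functions, it lends itself to unit tests. A pytest-style sketch follows; the file name `tests/test_smart_writer.py` and the test cases are assumptions, not part of this commit.

```python
# Hypothetical tests/test_smart_writer.py -- covers the pure mock logic only.
from tab_smart_writer import accept_flow_suggestion, refresh_context


def test_refresh_context_marks_first_pending_task_done():
    outline = [[True, "done"], [False, "pending"]]
    updated = refresh_context(outline)
    assert updated[1][0] is True       # first pending task is completed
    assert outline[1][0] is False      # the input outline is not mutated


def test_accept_flow_suggestion_ignores_placeholder():
    assert accept_flow_suggestion("text", "(等待输入...)") == "text"
    assert accept_flow_suggestion("text", "more") == "textmore"
```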
tab_test.py ADDED
@@ -0,0 +1,27 @@
1
+ import time
2
+
3
+ def run_model_handler_test():
4
+ """
5
+ Simulates a test for the ModelHandler's get_response method.
6
+ """
7
+ log = []
8
+ log.append("Running ModelHandler test...")
9
+ # Simulate some test logic
10
+ time.sleep(1)
11
+ log.append("ModelHandler test passed: Placeholder response received.")
12
+ print("\n".join(log)) # Also print to stdout
13
+ return "\n".join(log)
14
+
15
+ def run_clear_chat_test():
16
+ """
17
+ Simulates a test for the clear_chat functionality.
18
+ """
19
+ log = []
20
+ log.append("Running clear_chat test...")
21
+ # Simulate some test logic
22
+ time.sleep(1)
23
+ log.append("clear_chat test passed: Chatbot and textbox cleared.")
24
+ print("\n".join(log)) # Also print to stdout
25
+ return "\n".join(log)
26
+
27
+ # You can add more test functions here
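
If these smoke tests are meant to be run from the UI rather than only from a terminal, they could be surfaced on a small debug tab. The sketch below is illustrative only; no such tab exists in this diff.

```python
# Illustrative debug tab (hypothetical); wires the smoke tests to buttons.
import gradio as gr

from tab_test import run_clear_chat_test, run_model_handler_test


def create_test_tab():
    with gr.Column():
        output = gr.Textbox(label="测试输出", lines=6)
        gr.Button("Run ModelHandler test").click(fn=run_model_handler_test, outputs=output)
        gr.Button("Run clear_chat test").click(fn=run_clear_chat_test, outputs=output)
```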
ui_components/code_framework_selector.py ADDED
@@ -0,0 +1,24 @@
1
+ import gradio as gr
2
+
3
+ def create_code_framework_selector(framework_specs, default_framework_constant):
4
+ """
5
+ Creates a reusable Gradio code framework selector component.
6
+
7
+ Args:
8
+ framework_specs (dict): A dictionary containing the specifications for each framework.
9
+ default_framework_constant (str): The key for the default framework in the framework_specs dictionary.
10
+
11
+ Returns:
12
+ gr.Dropdown: The framework selection dropdown component.
13
+ """
14
+ display_names = [d["display_name"] for d in framework_specs.values()]
15
+ default_display_name = framework_specs[default_framework_constant]["display_name"]
16
+
17
+ framework_dropdown = gr.Dropdown(
18
+ choices=display_names,
19
+ label="代码类型",
20
+ value=default_display_name,
21
+ interactive=True
22
+ )
23
+
24
+ return framework_dropdown
ui_components/model_selector.py ADDED
@@ -0,0 +1,40 @@
1
+ import gradio as gr
2
+
3
+ def create_model_selector(model_specs, default_model_constant):
4
+ """
5
+ Creates a reusable Gradio model selector component.
6
+
7
+ Args:
8
+ model_specs (dict): A dictionary containing the specifications for each model.
9
+ default_model_constant (str): The key for the default model in the model_specs dictionary.
10
+
11
+ Returns:
12
+ tuple: A tuple containing the model dropdown and the model description markdown components.
13
+ """
14
+ display_names = [d["display_name"] for d in model_specs.values()]
15
+ default_display_name = model_specs[default_model_constant]["display_name"]
16
+
17
+
18
+ model_dropdown = gr.Dropdown(
19
+ choices=display_names,
20
+ label="模型选择",
21
+ value=default_display_name,
22
+ interactive=True
23
+ )
24
+
25
+ def get_model_description(model_display_name):
26
+ for model_spec in model_specs.values():
27
+ if model_spec["display_name"] == model_display_name:
28
+ return model_spec["description"]
29
+ return ""
30
+
31
+ model_description_markdown = gr.Markdown(get_model_description(default_display_name),
32
+ container=True)
33
+
34
+ model_dropdown.change(
35
+ fn=get_model_description,
36
+ inputs=[model_dropdown],
37
+ outputs=[model_description_markdown]
38
+ )
39
+
40
+ return model_dropdown, model_description_markdown
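
For reference, the selector only relies on two keys per spec entry, `display_name` and `description`. The snippet below shows the expected shape with made-up entries; the real specs live in `config.py` and are not shown in this diff.

```python
# Usage sketch; EXAMPLE_SPECS only illustrates the keys the selector reads.
import gradio as gr

from ui_components.model_selector import create_model_selector

EXAMPLE_SPECS = {
    "EXAMPLE_A": {"display_name": "Ling-1T", "description": "示例描述 A"},
    "EXAMPLE_B": {"display_name": "Ling-mini", "description": "示例描述 B"},
}

with gr.Blocks() as demo:
    model_dropdown, model_description_markdown = create_model_selector(
        model_specs=EXAMPLE_SPECS,
        default_model_constant="EXAMPLE_A",
    )
```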
utils.py ADDED
@@ -0,0 +1,9 @@
1
+ import socket
2
+
3
+ def find_free_port():
4
+ """
5
+ Finds a free port on the local machine.
6
+ """
7
+ with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
8
+ s.bind(("", 0))
9
+ return s.getsockname()[1]
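
A typical use is picking the port at launch time when running several Gradio apps locally. This is a usage sketch; the demo app here is a placeholder, not the project's real entry point.

```python
# Usage sketch for find_free_port (placeholder demo app).
import gradio as gr

from utils import find_free_port

with gr.Blocks() as demo:
    gr.Markdown("port demo")

if __name__ == "__main__":
    demo.launch(server_port=find_free_port())
```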