Joseph Pollack committed on
Commit 12b7aab · unverified · 1 Parent(s): 35d9120

adds interface fixes, sidebar settings, oauth fixes, more graphs, the determinator, and more

README.md CHANGED
@@ -16,7 +16,7 @@ license: mit
16
  tags:
17
  - mcp-in-action-track-enterprise
18
  - mcp-hackathon
19
- - drug-repurposing
20
  - biomedical-ai
21
  - pydantic-ai
22
  - llamaindex
@@ -41,11 +41,13 @@ tags:
41
 
42
  </div>
43
 
44
- # DeepCritical
45
 
46
  ## About
47
 
48
- The [Deep Critical Gradio Hackathon Team](### Team) met online in the Alzheimer's Critical Literature Review Group in the Hugging Science initiative. We're building the agent framework we want to use for ai assisted research to [turn the vast amounts of clinical data into cures](https://github.com/DeepCritical/GradioDemo).
 
 
49
 
50
 For this hackathon we're proposing a simple yet powerful Deep Research Agent that iteratively looks for the answer until it finds it, using general-purpose web search and special-purpose retrievers for technical sources.
51
 
@@ -73,7 +75,7 @@ For this hackathon we're proposing a simple yet powerful Deep Research Agent tha
73
 - [ ] Apply Deep Research Systems To Generate Short Form Video (up to 5 minutes)
74
 - [ ] Visualize Pydantic Graphs as Loading Screens in the UI
75
 - [ ] Improve Data Science with more Complex Graph Agents
76
- - [] Create Deep Critical Drug Reporposing / Discovery Demo
77
 - [ ] Create Deep Critical Literature Review
78
 - [ ] Create Deep Critical Hypothesis Generator
79
 - [ ] Create PyPI Package
 
16
  tags:
17
  - mcp-in-action-track-enterprise
18
  - mcp-hackathon
19
+ - deep-research
20
  - biomedical-ai
21
  - pydantic-ai
22
  - llamaindex
 
41
 
42
  </div>
43
 
44
+ # The DETERMINATOR
45
 
46
  ## About
47
 
48
+ The DETERMINATOR is a deep research agent system designed to assist with complex research questions requiring thorough investigation. Originally developed by the Deep Critical Gradio Hackathon Team, The DETERMINATOR specializes in medical research inquiry, functioning as a medical peer junior researcher that helps gather, evaluate, and synthesize evidence from multiple sources.
49
+
50
+ **Important**: The DETERMINATOR is a research tool and cannot answer medical questions or provide medical advice. It assists researchers by finding and organizing evidence from biomedical literature and clinical trial databases.
51
 
52
 For this hackathon we're proposing a simple yet powerful Deep Research Agent that iteratively looks for the answer until it finds it, using general-purpose web search and special-purpose retrievers for technical sources.
53
 
 
75
 - [ ] Apply Deep Research Systems To Generate Short Form Video (up to 5 minutes)
76
 - [ ] Visualize Pydantic Graphs as Loading Screens in the UI
77
 - [ ] Improve Data Science with more Complex Graph Agents
78
+ - [ ] Create The DETERMINATOR Deep Research Demo
79
 - [ ] Create Deep Critical Literature Review
80
 - [ ] Create Deep Critical Hypothesis Generator
81
 - [ ] Create PyPI Package
docs/contributing.md CHANGED
@@ -1,6 +1,6 @@
1
- # Contributing to DeepCritical
2
 
3
- Thank you for your interest in contributing to DeepCritical! This guide will help you get started.
4
 
5
  ## Table of Contents
6
 
 
1
+ # Contributing to The DETERMINATOR
2
 
3
+ Thank you for your interest in contributing to The DETERMINATOR! This guide will help you get started.
4
 
5
  ## Table of Contents
6
 
docs/getting-started/examples.md CHANGED
@@ -1,6 +1,6 @@
1
  # Examples
2
 
3
- This page provides examples of using DeepCritical for various research tasks.
4
 
5
  ## Basic Research Query
6
 
@@ -11,7 +11,7 @@ This page provides examples of using DeepCritical for various research tasks.
11
  What are the latest treatments for Alzheimer's disease?
12
  ```
13
 
14
- **What DeepCritical Does**:
15
  1. Searches PubMed for recent papers
16
  2. Searches ClinicalTrials.gov for active trials
17
  3. Evaluates evidence quality
@@ -24,7 +24,7 @@ What are the latest treatments for Alzheimer's disease?
24
  What clinical trials are investigating metformin for cancer prevention?
25
  ```
26
 
27
- **What DeepCritical Does**:
28
  1. Searches ClinicalTrials.gov for relevant trials
29
  2. Searches PubMed for supporting literature
30
  3. Provides trial details and status
@@ -40,7 +40,7 @@ Review the evidence for using metformin as an anti-aging intervention,
40
  including clinical trials, mechanisms of action, and safety profile.
41
  ```
42
 
43
- **What DeepCritical Does**:
44
  1. Uses deep research mode (multi-section)
45
  2. Searches multiple sources in parallel
46
  3. Generates sections on:
@@ -56,7 +56,7 @@ including clinical trials, mechanisms of action, and safety profile.
56
  Test the hypothesis that regular exercise reduces Alzheimer's disease risk.
57
  ```
58
 
59
- **What DeepCritical Does**:
60
  1. Generates testable hypotheses
61
  2. Searches for supporting/contradicting evidence
62
  3. Performs statistical analysis (if Modal configured)
 
1
  # Examples
2
 
3
+ This page provides examples of using The DETERMINATOR for various research tasks.
4
 
5
  ## Basic Research Query
6
 
 
11
  What are the latest treatments for Alzheimer's disease?
12
  ```
13
 
14
+ **What The DETERMINATOR Does**:
15
  1. Searches PubMed for recent papers
16
  2. Searches ClinicalTrials.gov for active trials
17
  3. Evaluates evidence quality
 
24
  What clinical trials are investigating metformin for cancer prevention?
25
  ```
26
 
27
+ **What The DETERMINATOR Does**:
28
  1. Searches ClinicalTrials.gov for relevant trials
29
  2. Searches PubMed for supporting literature
30
  3. Provides trial details and status
 
40
  including clinical trials, mechanisms of action, and safety profile.
41
  ```
42
 
43
+ **What The DETERMINATOR Does**:
44
  1. Uses deep research mode (multi-section)
45
  2. Searches multiple sources in parallel
46
  3. Generates sections on:
 
56
  Test the hypothesis that regular exercise reduces Alzheimer's disease risk.
57
  ```
58
 
59
+ **What The DETERMINATOR Does**:
60
  1. Generates testable hypotheses
61
  2. Searches for supporting/contradicting evidence
62
  3. Performs statistical analysis (if Modal configured)
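As a side note to the example queries above: they can also be exercised outside the Gradio UI. A minimal sketch, assuming the `examples/search_demo/run_search.py` script touched later in this commit is present and the required API keys are configured:

```bash
# Pass any of the example research queries as the first CLI argument
python examples/search_demo/run_search.py "What are the latest treatments for Alzheimer's disease?"
```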
docs/getting-started/mcp-integration.md CHANGED
@@ -1,10 +1,10 @@
1
  # MCP Integration
2
 
3
- DeepCritical exposes a Model Context Protocol (MCP) server, allowing you to use its search tools directly from Claude Desktop or other MCP clients.
4
 
5
  ## What is MCP?
6
 
7
- The Model Context Protocol (MCP) is a standard for connecting AI assistants to external tools and data sources. DeepCritical implements an MCP server that exposes its search capabilities as MCP tools.
8
 
9
  ## MCP Server URL
10
 
@@ -33,14 +33,14 @@ http://localhost:7860/gradio_api/mcp/
33
  ~/.config/Claude/claude_desktop_config.json
34
  ```
35
 
36
- ### 2. Add DeepCritical Server
37
 
38
  Edit `claude_desktop_config.json` and add:
39
 
40
  ```json
41
  {
42
  "mcpServers": {
43
- "deepcritical": {
44
  "url": "http://localhost:7860/gradio_api/mcp/"
45
  }
46
  }
@@ -53,7 +53,7 @@ Close and restart Claude Desktop for changes to take effect.
53
 
54
  ### 4. Verify Connection
55
 
56
- In Claude Desktop, you should see DeepCritical tools available:
57
  - `search_pubmed`
58
  - `search_clinical_trials`
59
  - `search_biorxiv`
 
1
  # MCP Integration
2
 
3
+ The DETERMINATOR exposes a Model Context Protocol (MCP) server, allowing you to use its search tools directly from Claude Desktop or other MCP clients.
4
 
5
  ## What is MCP?
6
 
7
+ The Model Context Protocol (MCP) is a standard for connecting AI assistants to external tools and data sources. The DETERMINATOR implements an MCP server that exposes its search capabilities as MCP tools.
8
 
9
  ## MCP Server URL
10
 
 
33
  ~/.config/Claude/claude_desktop_config.json
34
  ```
35
 
36
+ ### 2. Add The DETERMINATOR Server
37
 
38
  Edit `claude_desktop_config.json` and add:
39
 
40
  ```json
41
  {
42
  "mcpServers": {
43
+ "determinator": {
44
  "url": "http://localhost:7860/gradio_api/mcp/"
45
  }
46
  }
 
53
 
54
  ### 4. Verify Connection
55
 
56
+ In Claude Desktop, you should see The DETERMINATOR's tools available:
57
  - `search_pubmed`
58
  - `search_clinical_trials`
59
  - `search_biorxiv`
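For a hosted deployment instead of localhost, the same configuration shape applies. A minimal sketch, assuming the app runs on a public Hugging Face Space (the hostname below is a placeholder, not the project's actual Space URL):

```json
{
  "mcpServers": {
    "determinator": {
      "url": "https://YOUR-SPACE-SUBDOMAIN.hf.space/gradio_api/mcp/"
    }
  }
}
```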
docs/getting-started/quick-start.md CHANGED
@@ -1,6 +1,42 @@
1
- # Quick Start Guide
2
 
3
- Get up and running with DeepCritical in minutes.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
4
 
5
  ## Start the Application
6
 
@@ -99,7 +135,7 @@ What are the active clinical trials investigating Alzheimer's disease treatments
99
 
100
  ## Next Steps
101
 
102
- - Learn about [MCP Integration](mcp-integration.md) to use DeepCritical from Claude Desktop
103
  - Explore [Examples](examples.md) for more use cases
104
  - Read the [Configuration Guide](../configuration/index.md) for advanced settings
105
  - Check out the [Architecture Documentation](../architecture/graph-orchestration.md) to understand how it works
 
1
+ # Single Command Deploy
2
 
3
+ Deploy with Docker instantly using a single command:
4
+
5
+ ```bash
6
+ docker run -it -p 7860:7860 --platform=linux/amd64 \
7
+ -e DB_KEY="YOUR_VALUE_HERE" \
8
+ -e SERP_API="YOUR_VALUE_HERE" \
9
+ -e INFERENCE_API="YOUR_VALUE_HERE" \
10
+ -e MODAL_TOKEN_ID="YOUR_VALUE_HERE" \
11
+ -e MODAL_TOKEN_SECRET="YOUR_VALUE_HERE" \
12
+ -e NCBI_API_KEY="YOUR_VALUE_HERE" \
13
+ -e SERPER_API_KEY="YOUR_VALUE_HERE" \
14
+ -e CHROMA_DB_PATH="./chroma_db" \
15
+ -e CHROMA_DB_HOST="localhost" \
16
+ -e CHROMA_DB_PORT="8000" \
17
+ -e RAG_COLLECTION_NAME="deepcritical_evidence" \
18
+ -e RAG_SIMILARITY_TOP_K="5" \
19
+ -e RAG_AUTO_INGEST="true" \
20
+ -e USE_GRAPH_EXECUTION="false" \
21
+ -e DEFAULT_TOKEN_LIMIT="100000" \
22
+ -e DEFAULT_TIME_LIMIT_MINUTES="10" \
23
+ -e DEFAULT_ITERATIONS_LIMIT="10" \
24
+ -e WEB_SEARCH_PROVIDER="duckduckgo" \
25
+ -e MAX_ITERATIONS="10" \
26
+ -e SEARCH_TIMEOUT="30" \
27
+ -e LOG_LEVEL="DEBUG" \
28
+ -e EMBEDDING_PROVIDER="local" \
29
+ -e OPENAI_EMBEDDING_MODEL="text-embedding-3-small" \
30
+ -e LOCAL_EMBEDDING_MODEL="BAAI/bge-small-en-v1.5" \
31
+ -e HUGGINGFACE_EMBEDDING_MODEL="sentence-transformers/all-MiniLM-L6-v2" \
32
+ -e HF_FALLBACK_MODELS="Qwen/Qwen3-Next-80B-A3B-Thinking,Qwen/Qwen3-Next-80B-A3B-Instruct,meta-llama/Llama-3.3-70B-Instruct,meta-llama/Llama-3.1-8B-Instruct,HuggingFaceH4/zephyr-7b-beta,Qwen/Qwen2-7B-Instruct" \
33
+ -e HUGGINGFACE_MODEL="Qwen/Qwen3-Next-80B-A3B-Thinking" \
34
+ registry.hf.space/dataquests-deepcritical:latest python src/app.py
35
+ ```
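For repeatable local runs, the same flags can be captured in a Compose file. This is a minimal sketch, not something included in this commit; only a subset of the environment variables is shown, and the rest should be copied from the command above:

```yaml
# Hypothetical docker-compose.yml mirroring the docker run command above
services:
  determinator:
    image: registry.hf.space/dataquests-deepcritical:latest
    command: python src/app.py
    platform: linux/amd64
    ports:
      - "7860:7860"
    environment:
      DB_KEY: "YOUR_VALUE_HERE"
      SERP_API: "YOUR_VALUE_HERE"
      INFERENCE_API: "YOUR_VALUE_HERE"
      NCBI_API_KEY: "YOUR_VALUE_HERE"
      WEB_SEARCH_PROVIDER: "duckduckgo"
      LOG_LEVEL: "DEBUG"
      # ...remaining variables from the docker run command go here
```

Start it with `docker compose up`.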
36
+
37
+ ## Quick Start Guide
38
+
39
+ Get up and running with The DETERMINATOR in minutes.
40
 
41
  ## Start the Application
42
 
 
135
 
136
  ## Next Steps
137
 
138
+ - Learn about [MCP Integration](mcp-integration.md) to use The DETERMINATOR from Claude Desktop
139
  - Explore [Examples](examples.md) for more use cases
140
  - Read the [Configuration Guide](../configuration/index.md) for advanced settings
141
  - Check out the [Architecture Documentation](../architecture/graph-orchestration.md) to understand how it works
docs/index.md CHANGED
@@ -1,8 +1,10 @@
1
- # DeepCritical
2
 
3
- **AI-Native Drug Repurposing Research Agent**
4
 
5
- DeepCritical is a deep research agent system that uses iterative search-and-judge loops to comprehensively answer research questions. The system supports multiple orchestration patterns, graph-based execution, parallel research workflows, and long-running task management with real-time streaming.
 
 
6
 
7
  ## Features
8
 
@@ -34,7 +36,7 @@ For detailed installation and setup instructions, see the [Getting Started Guide
34
 
35
  ## Architecture
36
 
37
- DeepCritical uses a Vertical Slice Architecture:
38
 
39
  1. **Search Slice**: Retrieving evidence from PubMed, ClinicalTrials.gov, and Europe PMC
40
  2. **Judge Slice**: Evaluating evidence quality using LLMs
 
1
+ # The DETERMINATOR
2
 
3
+ **Deep Research Agent for Medical Inquiry**
4
 
5
+ The DETERMINATOR is a deep research agent system that uses iterative search-and-judge loops to comprehensively investigate research questions. The system supports multiple orchestration patterns, graph-based execution, parallel research workflows, and long-running task management with real-time streaming.
6
+
7
+ **Important**: The DETERMINATOR functions as a medical peer junior researcher that assists with research by gathering and synthesizing evidence. It cannot answer medical questions or provide medical advice.
8
 
9
  ## Features
10
 
 
36
 
37
  ## Architecture
38
 
39
+ The DETERMINATOR uses a Vertical Slice Architecture:
40
 
41
  1. **Search Slice**: Retrieving evidence from PubMed, ClinicalTrials.gov, and Europe PMC
42
  2. **Judge Slice**: Evaluating evidence quality using LLMs
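To make the iterative search-and-judge loop described above concrete, here is a minimal conceptual sketch; every name in it (run_search, judge_evidence, ResearchState) is illustrative only and is not the repository's actual orchestrator API:

```python
# Conceptual sketch of an iterative search-and-judge loop.
# The Search Slice gathers evidence; the Judge Slice decides whether it is sufficient.
from dataclasses import dataclass, field


@dataclass
class ResearchState:
    question: str
    evidence: list[str] = field(default_factory=list)


def run_search(question: str, gap: str | None) -> list[str]:
    """Placeholder for the Search Slice (PubMed, ClinicalTrials.gov, Europe PMC)."""
    return [f"evidence for: {gap or question}"]


def judge_evidence(state: ResearchState) -> tuple[bool, str | None]:
    """Placeholder for the Judge Slice: decide if evidence suffices, else name a gap."""
    sufficient = len(state.evidence) >= 3
    gap = None if sufficient else "narrower follow-up query"
    return sufficient, gap


def research(question: str, max_iterations: int = 10) -> ResearchState:
    """Loop search -> judge until the judge is satisfied or iterations run out."""
    state = ResearchState(question)
    gap: str | None = None
    for _ in range(max_iterations):
        state.evidence.extend(run_search(question, gap))
        sufficient, gap = judge_evidence(state)
        if sufficient:
            break
    return state
```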
docs/overview/architecture.md CHANGED
@@ -1,6 +1,6 @@
1
  # Architecture Overview
2
 
3
- DeepCritical is a deep research agent system that uses iterative search-and-judge loops to comprehensively answer research questions. The system supports multiple orchestration patterns, graph-based execution, parallel research workflows, and long-running task management with real-time streaming.
4
 
5
  ## Core Architecture
6
 
 
1
  # Architecture Overview
2
 
3
+ The DETERMINATOR is a deep research agent system that uses iterative search-and-judge loops to comprehensively answer research questions. The system supports multiple orchestration patterns, graph-based execution, parallel research workflows, and long-running task management with real-time streaming.
4
 
5
  ## Core Architecture
6
 
docs/overview/features.md CHANGED
@@ -1,6 +1,6 @@
1
  # Features
2
 
3
- DeepCritical provides a comprehensive set of features for AI-assisted research:
4
 
5
  ## Core Features
6
 
@@ -14,7 +14,7 @@ DeepCritical provides a comprehensive set of features for AI-assisted research:
14
  ### MCP Integration
15
 
16
  - **Model Context Protocol**: Expose search tools via MCP server
17
- - **Claude Desktop**: Use DeepCritical tools directly from Claude Desktop
18
  - **MCP Clients**: Compatible with any MCP-compatible client
19
 
20
  ### Authentication
 
1
  # Features
2
 
3
+ The DETERMINATOR provides a comprehensive set of features for AI-assisted research:
4
 
5
  ## Core Features
6
 
 
14
  ### MCP Integration
15
 
16
  - **Model Context Protocol**: Expose search tools via MCP server
17
+ - **Claude Desktop**: Use The DETERMINATOR's tools directly from Claude Desktop
18
  - **MCP Clients**: Compatible with any MCP-compatible client
19
 
20
  ### Authentication
examples/README.md CHANGED
@@ -1,8 +1,8 @@
1
- # DeepCritical Examples
2
 
3
  **NO MOCKS. NO FAKE DATA. REAL SCIENCE.**
4
 
5
- These demos run the REAL drug repurposing research pipeline with actual API calls.
6
 
7
  ---
8
 
@@ -181,4 +181,4 @@ Mocks belong in `tests/unit/`, not in demos. When you run these examples, you se
181
  - Real scientific hypotheses
182
  - Real research reports
183
 
184
- This is what DeepCritical actually does. No fake data. No canned responses.
 
1
+ # The DETERMINATOR Examples
2
 
3
  **NO MOCKS. NO FAKE DATA. REAL SCIENCE.**
4
 
5
+ These demos run the REAL deep research pipeline with actual API calls.
6
 
7
  ---
8
 
 
181
  - Real scientific hypotheses
182
  - Real research reports
183
 
184
+ This is what The DETERMINATOR actually does. No fake data. No canned responses.
examples/full_stack_demo/run_full.py CHANGED
@@ -1,8 +1,8 @@
1
  #!/usr/bin/env python3
2
  """
3
- Demo: Full Stack DeepCritical Agent (Phases 1-8).
4
 
5
- This script demonstrates the COMPLETE REAL drug repurposing research pipeline:
6
  - Phase 2: REAL Search (PubMed + ClinicalTrials + Europe PMC)
7
  - Phase 6: REAL Embeddings (sentence-transformers + ChromaDB)
8
  - Phase 7: REAL Hypothesis (LLM mechanistic reasoning)
 
1
  #!/usr/bin/env python3
2
  """
3
+ Demo: Full Stack DETERMINATOR Agent (Phases 1-8).
4
 
5
+ This script demonstrates the COMPLETE REAL deep research pipeline:
6
  - Phase 2: REAL Search (PubMed + ClinicalTrials + Europe PMC)
7
  - Phase 6: REAL Embeddings (sentence-transformers + ChromaDB)
8
  - Phase 7: REAL Hypothesis (LLM mechanistic reasoning)
examples/search_demo/run_search.py CHANGED
@@ -1,6 +1,6 @@
1
  #!/usr/bin/env python3
2
  """
3
- Demo: Search for drug repurposing evidence.
4
 
5
  This script demonstrates multi-source search functionality:
6
  - PubMed search (biomedical literature)
@@ -30,7 +30,7 @@ from src.tools.search_handler import SearchHandler
30
  async def main(query: str) -> None:
31
  """Run search demo with the given query."""
32
  print(f"\n{'=' * 60}")
33
- print("DeepCritical Search Demo")
34
  print(f"Query: {query}")
35
  print(f"{'=' * 60}\n")
36
 
@@ -61,7 +61,7 @@ async def main(query: str) -> None:
61
 
62
  if __name__ == "__main__":
63
  # Default query or use command line arg
64
- default_query = "metformin Alzheimer's disease drug repurposing"
65
  query = sys.argv[1] if len(sys.argv) > 1 else default_query
66
 
67
  asyncio.run(main(query))
 
1
  #!/usr/bin/env python3
2
  """
3
+ Demo: Search for biomedical research evidence.
4
 
5
  This script demonstrates multi-source search functionality:
6
  - PubMed search (biomedical literature)
 
30
  async def main(query: str) -> None:
31
  """Run search demo with the given query."""
32
  print(f"\n{'=' * 60}")
33
+ print("The DETERMINATOR Search Demo")
34
  print(f"Query: {query}")
35
  print(f"{'=' * 60}\n")
36
 
 
61
 
62
  if __name__ == "__main__":
63
  # Default query or use command line arg
64
+ default_query = "metformin Alzheimer's disease treatment mechanisms"
65
  query = sys.argv[1] if len(sys.argv) > 1 else default_query
66
 
67
  asyncio.run(main(query))
mkdocs.yml CHANGED
@@ -1,6 +1,6 @@
1
- site_name: DeepCritical
2
- site_description: AI-Native Drug Repurposing Research Agent
3
- site_author: DeepCritical Team
4
  site_url: https://deepcritical.github.io/GradioDemo/
5
 
6
  repo_name: DeepCritical/GradioDemo
 
1
+ site_name: The DETERMINATOR
2
+ site_description: Deep Research Agent for Medical Inquiry
3
+ site_author: The DETERMINATOR Team
4
  site_url: https://deepcritical.github.io/GradioDemo/
5
 
6
  repo_name: DeepCritical/GradioDemo
pyproject.toml CHANGED
@@ -1,7 +1,7 @@
1
  [project]
2
- name = "deepcritical"
3
  version = "0.1.0"
4
- description = "AI-Native Drug Repurposing Research Agent"
5
  readme = "README.md"
6
  requires-python = ">=3.11"
7
  dependencies = [
@@ -41,6 +41,7 @@ dependencies = [
41
  "numpy<2.0",
42
  "llama-index-llms-openai>=0.6.9",
43
  "llama-index-embeddings-openai>=0.5.1",
 
44
  ]
45
 
46
  [project.optional-dependencies]
 
1
  [project]
2
+ name = "determinator"
3
  version = "0.1.0"
4
+ description = "The DETERMINATOR - Deep Research Agent for Medical Inquiry"
5
  readme = "README.md"
6
  requires-python = ">=3.11"
7
  dependencies = [
 
41
  "numpy<2.0",
42
  "llama-index-llms-openai>=0.6.9",
43
  "llama-index-embeddings-openai>=0.5.1",
44
+ "ddgs>=9.9.2",
45
  ]
46
 
47
  [project.optional-dependencies]
requirements.txt CHANGED
@@ -33,8 +33,9 @@ limits>=3.0 # Rate limiting
33
  pydantic-graph>=1.22.0
34
 
35
  # Web search
36
- duckduckgo-search>=5.0
37
-
 
38
  # LlamaIndex RAG
39
  llama-index-llms-huggingface>=0.6.1
40
  llama-index-llms-huggingface-api>=0.6.1
 
33
  pydantic-graph>=1.22.0
34
 
35
  # Web search
36
+ ddgs>=9.9.2 # duckduckgo-search has been renamed to ddgs
37
+ fake-useragent==2.2.0
38
+ socksio==1.0.0
39
  # LlamaIndex RAG
40
  llama-index-llms-huggingface>=0.6.1
41
  llama-index-llms-huggingface-api>=0.6.1
src/agents/magentic_agents.py CHANGED
@@ -29,7 +29,7 @@ def create_search_agent(chat_client: Any | None = None) -> ChatAgent:
29
  name="SearchAgent",
30
  description=(
31
  "Searches biomedical databases (PubMed, ClinicalTrials.gov, Europe PMC) "
32
- "for drug repurposing evidence"
33
  ),
34
  instructions="""You are a biomedical search specialist. When asked to find evidence:
35
 
@@ -100,7 +100,7 @@ def create_hypothesis_agent(chat_client: Any | None = None) -> ChatAgent:
100
 
101
  return ChatAgent(
102
  name="HypothesisAgent",
103
- description="Generates mechanistic hypotheses for drug repurposing",
104
  instructions="""You are a biomedical hypothesis generator. Based on evidence:
105
 
106
  1. Identify the key molecular targets involved
 
29
  name="SearchAgent",
30
  description=(
31
  "Searches biomedical databases (PubMed, ClinicalTrials.gov, Europe PMC) "
32
+ "for research evidence"
33
  ),
34
  instructions="""You are a biomedical search specialist. When asked to find evidence:
35
 
 
100
 
101
  return ChatAgent(
102
  name="HypothesisAgent",
103
+ description="Generates mechanistic hypotheses for research investigation",
104
  instructions="""You are a biomedical hypothesis generator. Based on evidence:
105
 
106
  1. Identify the key molecular targets involved
src/agents/search_agent.py CHANGED
@@ -28,7 +28,7 @@ class SearchAgent(BaseAgent): # type: ignore[misc]
28
  ) -> None:
29
  super().__init__(
30
  name="SearchAgent",
31
- description="Searches PubMed for drug repurposing evidence",
32
  )
33
  self._handler = search_handler
34
  self._evidence_store = evidence_store
 
28
  ) -> None:
29
  super().__init__(
30
  name="SearchAgent",
31
+ description="Searches PubMed for biomedical research evidence",
32
  )
33
  self._handler = search_handler
34
  self._evidence_store = evidence_store
src/agents/tools.py CHANGED
@@ -80,7 +80,7 @@ async def search_clinical_trials(query: str, max_results: int = 10) -> str:
80
  """Search ClinicalTrials.gov for clinical studies.
81
 
82
  Use this tool to find ongoing and completed clinical trials
83
- for drug repurposing candidates.
84
 
85
  Args:
86
  query: Search terms (e.g., "metformin cancer phase 3")
 
80
  """Search ClinicalTrials.gov for clinical studies.
81
 
82
  Use this tool to find ongoing and completed clinical trials
83
+ for research investigation.
84
 
85
  Args:
86
  query: Search terms (e.g., "metformin cancer phase 3")
src/app.py CHANGED
@@ -1,4 +1,4 @@
1
- """Gradio UI for DeepCritical agent with MCP server support."""
2
 
3
  import os
4
  from collections.abc import AsyncGenerator
@@ -737,7 +737,7 @@ def create_demo() -> gr.Blocks:
737
  Returns:
738
  Configured Gradio Blocks interface with MCP server and OAuth enabled
739
  """
740
- with gr.Blocks(title="🧬 DeepCritical", fill_height=True) as demo:
741
  # Add sidebar with login button and information
742
  # Reference: Working implementation pattern from Gradio docs
743
  with gr.Sidebar():
@@ -750,96 +750,99 @@ def create_demo() -> gr.Blocks:
750
  gr.Markdown("---")
751
  gr.Markdown("### ℹ️ About") # noqa: RUF001
752
  gr.Markdown(
753
- "AI-Powered Drug Repurposing Agent that searches:\n"
 
754
  "- PubMed\n"
755
  "- ClinicalTrials.gov\n"
756
- "- Europe PMC"
 
757
  )
758
-
759
- # Create settings components
760
- # Note: ChatInterface doesn't support additional_inputs_accordion parameter in Gradio 6.0
761
- # Components are created outside accordion context to ensure they're accessible for additional_inputs
762
- mode_radio = gr.Radio(
763
- choices=["simple", "advanced", "iterative", "deep", "auto"],
764
- value="simple",
765
- label="Orchestrator Mode",
766
- info=(
767
- "Simple: Linear search-judge loop | "
768
- "Advanced: Multi-agent (OpenAI) | "
769
- "Iterative: Knowledge-gap driven | "
770
- "Deep: Parallel sections | "
771
- "Auto: Smart routing"
772
- ),
773
- )
774
-
775
- # Graph mode selection
776
- graph_mode_radio = gr.Radio(
777
- choices=["iterative", "deep", "auto"],
778
- value="auto",
779
- label="Graph Research Mode",
780
- info="Iterative: Single loop | Deep: Parallel sections | Auto: Detect from query",
781
- )
782
-
783
- # Graph execution toggle
784
- use_graph_checkbox = gr.Checkbox(
785
- value=True,
786
- label="Use Graph Execution",
787
- info="Enable graph-based workflow execution",
788
- )
789
-
790
- # TTS Configuration components
791
- # Note: These are created outside accordion to ensure accessibility for additional_inputs
792
- # The ChatInterface will display them, but grouping in accordion is not supported via additional_inputs_accordion
793
- tts_voice_dropdown = gr.Dropdown(
794
- choices=[
795
- "af_heart",
796
- "af_bella",
797
- "af_nicole",
798
- "af_aoede",
799
- "af_kore",
800
- "af_sarah",
801
- "af_nova",
802
- "af_sky",
803
- "af_alloy",
804
- "af_jessica",
805
- "af_river",
806
- "am_michael",
807
- "am_fenrir",
808
- "am_puck",
809
- "am_echo",
810
- "am_eric",
811
- "am_liam",
812
- "am_onyx",
813
- "am_santa",
814
- "am_adam",
815
- ],
816
- value=settings.tts_voice,
817
- label="TTS Voice",
818
- info="Select TTS voice (American English voices: af_*, am_*)",
819
- visible=settings.enable_audio_output,
820
- )
821
- tts_speed_slider = gr.Slider(
822
- minimum=0.5,
823
- maximum=2.0,
824
- value=settings.tts_speed,
825
- step=0.1,
826
- label="TTS Speech Speed",
827
- info="Adjust TTS speech speed (0.5x to 2.0x)",
828
- visible=settings.enable_audio_output,
829
- )
830
- tts_gpu_dropdown = gr.Dropdown(
831
- choices=["T4", "A10", "A100", "L4", "L40S"],
832
- value=settings.tts_gpu or "T4",
833
- label="TTS GPU Type",
834
- info="Modal GPU type for TTS (T4 is cheapest, A100 is fastest). Note: GPU changes require app restart.",
835
- visible=settings.modal_available and settings.enable_audio_output,
836
- interactive=False, # GPU type set at function definition time, requires restart
837
- )
838
- enable_audio_output_checkbox = gr.Checkbox(
839
- value=settings.enable_audio_output,
840
- label="Enable Audio Output",
841
- info="Generate audio responses using TTS",
842
- )
 
843
 
844
  # Hidden text components for model/provider (not dropdowns to avoid value mismatch)
845
  # These will be empty by default and use defaults in configure_orchestrator
@@ -861,6 +864,22 @@ def create_demo() -> gr.Blocks:
861
  label="🔊 Audio Response",
862
  visible=settings.enable_audio_output,
863
  )
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
864
 
865
  # Chat interface with multimodal support
866
  # Examples are provided but will NOT run at startup (cache_examples=False)
@@ -868,12 +887,12 @@ def create_demo() -> gr.Blocks:
868
  gr.ChatInterface(
869
  fn=research_agent,
870
  multimodal=True, # Enable multimodal input (text + images + audio)
871
- title="🧬 DeepCritical",
872
  description=(
873
- "*AI-Powered Drug Repurposing Agent — searches PubMed, "
874
  "ClinicalTrials.gov & Europe PMC*\n\n"
875
  "---\n"
876
- "*Research tool only — not for medical advice.* \n"
877
  "**MCP Server Active**: Connect Claude Desktop to `/gradio_api/mcp/`\n\n"
878
  "**🎤 Multimodal Support**: Upload images (OCR), record audio (STT), or type text.\n\n"
879
  "**⚠️ Authentication Required**: Please **sign in with HuggingFace** above before using this application."
@@ -885,7 +904,7 @@ def create_demo() -> gr.Blocks:
885
  # Note: Provider is optional - if empty, HF will auto-select
886
  # These examples will NOT run at startup - users must click them after logging in
887
  [
888
- "What drugs could be repurposed for Alzheimer's disease?",
889
  "simple",
890
  "Qwen/Qwen3-Next-80B-A3B-Thinking",
891
  "",
 
1
+ """Gradio UI for The DETERMINATOR agent with MCP server support."""
2
 
3
  import os
4
  from collections.abc import AsyncGenerator
 
737
  Returns:
738
  Configured Gradio Blocks interface with MCP server and OAuth enabled
739
  """
740
+ with gr.Blocks(title="🔬 The DETERMINATOR", fill_height=True) as demo:
741
  # Add sidebar with login button and information
742
  # Reference: Working implementation pattern from Gradio docs
743
  with gr.Sidebar():
 
750
  gr.Markdown("---")
751
  gr.Markdown("### ℹ️ About") # noqa: RUF001
752
  gr.Markdown(
753
+ "**The DETERMINATOR** - Deep Research Agent for Medical Inquiry\n\n"
754
+ "Searches:\n"
755
  "- PubMed\n"
756
  "- ClinicalTrials.gov\n"
757
+ "- Europe PMC\n\n"
758
+ "⚠️ **Research tool only** - Cannot answer medical questions or provide medical advice."
759
  )
760
+ gr.Markdown("---")
761
+
762
+ # Settings Section - Organized in Accordions
763
+ gr.Markdown("## ⚙️ Settings")
764
+
765
+ # Research Configuration Accordion
766
+ with gr.Accordion("🔬 Research Configuration", open=True):
767
+ mode_radio = gr.Radio(
768
+ choices=["simple", "advanced", "iterative", "deep", "auto"],
769
+ value="simple",
770
+ label="Orchestrator Mode",
771
+ info=(
772
+ "Simple: Linear search-judge loop | "
773
+ "Advanced: Multi-agent (OpenAI) | "
774
+ "Iterative: Knowledge-gap driven | "
775
+ "Deep: Parallel sections | "
776
+ "Auto: Smart routing"
777
+ ),
778
+ )
779
+
780
+ graph_mode_radio = gr.Radio(
781
+ choices=["iterative", "deep", "auto"],
782
+ value="auto",
783
+ label="Graph Research Mode",
784
+ info="Iterative: Single loop | Deep: Parallel sections | Auto: Detect from query",
785
+ )
786
+
787
+ use_graph_checkbox = gr.Checkbox(
788
+ value=True,
789
+ label="Use Graph Execution",
790
+ info="Enable graph-based workflow execution",
791
+ )
792
+
793
+ # Audio/TTS Configuration Accordion
794
+ with gr.Accordion("🔊 Audio Output", open=False):
795
+ enable_audio_output_checkbox = gr.Checkbox(
796
+ value=settings.enable_audio_output,
797
+ label="Enable Audio Output",
798
+ info="Generate audio responses using TTS",
799
+ )
800
+
801
+ tts_voice_dropdown = gr.Dropdown(
802
+ choices=[
803
+ "af_heart",
804
+ "af_bella",
805
+ "af_nicole",
806
+ "af_aoede",
807
+ "af_kore",
808
+ "af_sarah",
809
+ "af_nova",
810
+ "af_sky",
811
+ "af_alloy",
812
+ "af_jessica",
813
+ "af_river",
814
+ "am_michael",
815
+ "am_fenrir",
816
+ "am_puck",
817
+ "am_echo",
818
+ "am_eric",
819
+ "am_liam",
820
+ "am_onyx",
821
+ "am_santa",
822
+ "am_adam",
823
+ ],
824
+ value=settings.tts_voice,
825
+ label="TTS Voice",
826
+ info="Select TTS voice (American English voices: af_*, am_*)",
827
+ )
828
+
829
+ tts_speed_slider = gr.Slider(
830
+ minimum=0.5,
831
+ maximum=2.0,
832
+ value=settings.tts_speed,
833
+ step=0.1,
834
+ label="TTS Speech Speed",
835
+ info="Adjust TTS speech speed (0.5x to 2.0x)",
836
+ )
837
+
838
+ tts_gpu_dropdown = gr.Dropdown(
839
+ choices=["T4", "A10", "A100", "L4", "L40S"],
840
+ value=settings.tts_gpu or "T4",
841
+ label="TTS GPU Type",
842
+ info="Modal GPU type for TTS (T4 is cheapest, A100 is fastest). Note: GPU changes require app restart.",
843
+ visible=settings.modal_available,
844
+ interactive=False, # GPU type set at function definition time, requires restart
845
+ )
846
 
847
  # Hidden text components for model/provider (not dropdowns to avoid value mismatch)
848
  # These will be empty by default and use defaults in configure_orchestrator
 
864
  label="🔊 Audio Response",
865
  visible=settings.enable_audio_output,
866
  )
867
+
868
+ # Update TTS component visibility based on enable_audio_output_checkbox
869
+ # This must be after audio_output is defined
870
+ def update_tts_visibility(enabled: bool) -> tuple[dict[str, Any], dict[str, Any], dict[str, Any]]:
871
+ """Update visibility of TTS components based on enable checkbox."""
872
+ return (
873
+ gr.update(visible=enabled),
874
+ gr.update(visible=enabled),
875
+ gr.update(visible=enabled),
876
+ )
877
+
878
+ enable_audio_output_checkbox.change(
879
+ fn=update_tts_visibility,
880
+ inputs=[enable_audio_output_checkbox],
881
+ outputs=[tts_voice_dropdown, tts_speed_slider, audio_output],
882
+ )
883
 
884
  # Chat interface with multimodal support
885
  # Examples are provided but will NOT run at startup (cache_examples=False)
 
887
  gr.ChatInterface(
888
  fn=research_agent,
889
  multimodal=True, # Enable multimodal input (text + images + audio)
890
+ title="🔬 The DETERMINATOR",
891
  description=(
892
+ "*Deep Research Agent for Medical Inquiry — searches PubMed, "
893
  "ClinicalTrials.gov & Europe PMC*\n\n"
894
  "---\n"
895
+ "*Functions as a medical peer junior researcher. Research tool only — cannot answer medical questions or provide medical advice.* \n"
896
  "**MCP Server Active**: Connect Claude Desktop to `/gradio_api/mcp/`\n\n"
897
  "**🎤 Multimodal Support**: Upload images (OCR), record audio (STT), or type text.\n\n"
898
  "**⚠️ Authentication Required**: Please **sign in with HuggingFace** above before using this application."
 
904
  # Note: Provider is optional - if empty, HF will auto-select
905
  # These examples will NOT run at startup - users must click them after logging in
906
  [
907
+ "What are the latest research findings on Alzheimer's disease treatments?",
908
  "simple",
909
  "Qwen/Qwen3-Next-80B-A3B-Thinking",
910
  "",
src/legacy_orchestrator.py CHANGED
@@ -374,7 +374,7 @@ class Orchestrator:
374
  ]
375
  )
376
 
377
- return f"""## Drug Repurposing Analysis
378
 
379
  ### Question
380
  {query}
 
374
  ]
375
  )
376
 
377
+ return f"""## Research Analysis
378
 
379
  ### Question
380
  {query}
src/mcp_tools.py CHANGED
@@ -1,4 +1,4 @@
1
- """MCP tool wrappers for DeepCritical search tools.
2
 
3
  These functions expose our search tools via MCP protocol.
4
  Each function follows the MCP tool contract:
@@ -24,7 +24,7 @@ async def search_pubmed(query: str, max_results: int = 10) -> str:
24
  Returns titles, authors, abstracts, and citation information.
25
 
26
  Args:
27
- query: Search query (e.g., "metformin alzheimer", "drug repurposing cancer")
28
  max_results: Maximum results to return (1-50, default 10)
29
 
30
  Returns:
@@ -113,7 +113,7 @@ async def search_all_sources(query: str, max_per_source: int = 5) -> str:
113
  """Search all biomedical sources simultaneously.
114
 
115
  Performs parallel search across PubMed, ClinicalTrials.gov, and Europe PMC.
116
- This is the most comprehensive search option for drug repurposing research.
117
 
118
  Args:
119
  query: Search query (e.g., "metformin alzheimer", "aspirin cancer prevention")
@@ -161,10 +161,10 @@ async def analyze_hypothesis(
161
  condition: str,
162
  evidence_summary: str,
163
  ) -> str:
164
- """Perform statistical analysis of drug repurposing hypothesis using Modal.
165
 
166
  Executes AI-generated Python code in a secure Modal sandbox to analyze
167
- the statistical evidence for a drug repurposing hypothesis.
168
 
169
  Args:
170
  drug: The drug being evaluated (e.g., "metformin")
 
1
+ """MCP tool wrappers for The DETERMINATOR search tools.
2
 
3
  These functions expose our search tools via MCP protocol.
4
  Each function follows the MCP tool contract:
 
24
  Returns titles, authors, abstracts, and citation information.
25
 
26
  Args:
27
+ query: Search query (e.g., "metformin alzheimer", "cancer treatment mechanisms")
28
  max_results: Maximum results to return (1-50, default 10)
29
 
30
  Returns:
 
113
  """Search all biomedical sources simultaneously.
114
 
115
  Performs parallel search across PubMed, ClinicalTrials.gov, and Europe PMC.
116
+ This is the most comprehensive search option for deep medical research inquiry.
117
 
118
  Args:
119
  query: Search query (e.g., "metformin alzheimer", "aspirin cancer prevention")
 
161
  condition: str,
162
  evidence_summary: str,
163
  ) -> str:
164
+ """Perform statistical analysis of research hypothesis using Modal.
165
 
166
  Executes AI-generated Python code in a secure Modal sandbox to analyze
167
+ the statistical evidence for a research hypothesis.
168
 
169
  Args:
170
  drug: The drug being evaluated (e.g., "metformin")
src/orchestrator/graph_orchestrator.py CHANGED
@@ -705,8 +705,39 @@ class GraphOrchestrator:
705
  if node.input_transformer:
706
  input_data = node.input_transformer(input_data)
707
 
708
- # Execute agent
709
- result = await node.agent.run(input_data)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
710
 
711
  # Transform output if needed
712
  output = result.output
@@ -855,10 +886,64 @@ class GraphOrchestrator:
855
  Next node ID
856
  """
857
  # Get previous result for decision
858
- prev_result = context.get_node_result(context.current_node)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
859
 
860
  # Make decision
861
- next_node_id = node.decision_function(prev_result)
 
 
 
 
 
 
 
 
 
 
862
 
863
  # Validate decision
864
  if next_node_id not in node.options:
 
705
  if node.input_transformer:
706
  input_data = node.input_transformer(input_data)
707
 
708
+ # Execute agent with error handling
709
+ try:
710
+ result = await node.agent.run(input_data)
711
+ except Exception as e:
712
+ # Handle validation errors and API errors for planner node
713
+ if node.node_id == "planner":
714
+ self.logger.error(
715
+ "Planner agent execution failed, using fallback plan",
716
+ error=str(e),
717
+ error_type=type(e).__name__,
718
+ )
719
+ # Return a minimal fallback ReportPlan
720
+ from src.utils.models import ReportPlan, ReportPlanSection
721
+
722
+ # Extract query from input_data if possible
723
+ fallback_query = query
724
+ if isinstance(input_data, str):
725
+ # Try to extract query from input string
726
+ if "QUERY:" in input_data:
727
+ fallback_query = input_data.split("QUERY:")[-1].strip()
728
+
729
+ return ReportPlan(
730
+ background_context="",
731
+ report_outline=[
732
+ ReportPlanSection(
733
+ title="Research Findings",
734
+ key_question=fallback_query,
735
+ )
736
+ ],
737
+ report_title=f"Research Report: {fallback_query[:50]}",
738
+ )
739
+ # For other nodes, re-raise the exception
740
+ raise
741
 
742
  # Transform output if needed
743
  output = result.output
 
886
  Next node ID
887
  """
888
  # Get previous result for decision
889
+ # The decision node needs the result from the node that connects to it
890
+ # Find the previous node by searching edges
891
+ prev_node_id: str | None = None
892
+ if self._graph:
893
+ # Find which node connects to this decision node
894
+ for from_node, edge_list in self._graph.edges.items():
895
+ for edge in edge_list:
896
+ if edge.to_node == node.node_id:
897
+ prev_node_id = from_node
898
+ break
899
+ if prev_node_id:
900
+ break
901
+
902
+ # Fallback: For continue_decision, it always comes from knowledge_gap
903
+ if not prev_node_id and node.node_id == "continue_decision":
904
+ prev_node_id = "knowledge_gap"
905
+
906
+ # Get result from previous node (or current node if no previous found)
907
+ if prev_node_id:
908
+ prev_result = context.get_node_result(prev_node_id)
909
+ else:
910
+ # Fallback: try to get from visited nodes (last visited before current)
911
+ visited_list = list(context.visited_nodes)
912
+ if len(visited_list) > 0:
913
+ prev_node_id = visited_list[-1]
914
+ prev_result = context.get_node_result(prev_node_id)
915
+ else:
916
+ prev_result = context.get_node_result(context.current_node)
917
+
918
+ # Handle case where result might be a tuple (from pydantic-graph)
919
+ # Extract the actual result object if it's a tuple
920
+ if isinstance(prev_result, tuple) and len(prev_result) > 0:
921
+ # Check if first element is a KnowledgeGapOutput-like object
922
+ if hasattr(prev_result[0], "research_complete"):
923
+ prev_result = prev_result[0]
924
+ elif len(prev_result) > 1 and hasattr(prev_result[1], "research_complete"):
925
+ prev_result = prev_result[1]
926
+ else:
927
+ # If tuple doesn't contain the object, log warning and use first element
928
+ self.logger.warning(
929
+ "Decision node received tuple result, extracting first element",
930
+ node_id=node.node_id,
931
+ tuple_length=len(prev_result),
932
+ )
933
+ prev_result = prev_result[0]
934
 
935
  # Make decision
936
+ try:
937
+ next_node_id = node.decision_function(prev_result)
938
+ except Exception as e:
939
+ self.logger.error(
940
+ "Decision function failed",
941
+ node_id=node.node_id,
942
+ error=str(e),
943
+ prev_result_type=type(prev_result).__name__,
944
+ )
945
+ # Default to first option on error
946
+ next_node_id = node.options[0]
947
 
948
  # Validate decision
949
  if next_node_id not in node.options:
src/orchestrator_magentic.py CHANGED
@@ -122,7 +122,7 @@ class MagenticOrchestrator:
122
 
123
  workflow = self._build_workflow()
124
 
125
- task = f"""Research drug repurposing opportunities for: {query}
126
 
127
  Workflow:
128
  1. SearchAgent: Find evidence from PubMed, ClinicalTrials.gov, and Europe PMC
 
122
 
123
  workflow = self._build_workflow()
124
 
125
+ task = f"""Research opportunities for: {query}
126
 
127
  Workflow:
128
  1. SearchAgent: Find evidence from PubMed, ClinicalTrials.gov, and Europe PMC
src/prompts/hypothesis.py CHANGED
@@ -8,9 +8,11 @@ if TYPE_CHECKING:
8
  from src.services.embeddings import EmbeddingService
9
  from src.utils.models import Evidence
10
 
11
- SYSTEM_PROMPT = """You are a biomedical research scientist specializing in drug repurposing.
12
 
13
- Your role is to generate mechanistic hypotheses based on evidence.
 
 
14
 
15
  A good hypothesis:
16
  1. Proposes a MECHANISM: Drug -> Target -> Pathway -> Effect
 
8
  from src.services.embeddings import EmbeddingService
9
  from src.utils.models import Evidence
10
 
11
+ SYSTEM_PROMPT = """You are a bioinformatics research scientist functioning as a medical peer junior researcher.
12
 
13
+ Your role is to generate mechanistic hypotheses and research questions based on evidence.
14
+
15
+ IMPORTANT: You are a research assistant. You cannot answer medical questions or provide medical advice. Your hypotheses are for research investigation purposes only.
16
 
17
  A good hypothesis:
18
  1. Proposes a MECHANISM: Drug -> Target -> Pathway -> Effect
src/prompts/judge.py CHANGED
@@ -2,10 +2,11 @@
2
 
3
  from src.utils.models import Evidence
4
 
5
- SYSTEM_PROMPT = """You are an expert drug repurposing research judge.
6
 
7
- Your task is to evaluate evidence from biomedical literature and determine if it's sufficient to
8
- recommend drug candidates for a given condition.
 
9
 
10
  ## Evaluation Criteria
11
 
@@ -70,7 +71,7 @@ def format_user_prompt(question: str, evidence: list[Evidence]) -> str:
70
 
71
  ## Your Task
72
 
73
- Evaluate this evidence and determine if it's sufficient to recommend drug repurposing candidates.
74
  Respond with a JSON object matching the JudgeAssessment schema.
75
  """
76
 
 
2
 
3
  from src.utils.models import Evidence
4
 
5
+ SYSTEM_PROMPT = """You are a medical research evidence evaluator functioning as a peer junior researcher.
6
 
7
+ Your task is to evaluate evidence from biomedical literature and determine if sufficient evidence has been gathered to synthesize findings for a given research question.
8
+
9
+ IMPORTANT: You are a research assistant. You cannot answer medical questions or provide medical advice. Your role is to assess whether enough evidence has been collected to support research conclusions.
10
 
11
  ## Evaluation Criteria
12
 
 
71
 
72
  ## Your Task
73
 
74
+ Evaluate this evidence and determine if it's sufficient to synthesize research findings. Consider the quality, quantity, and relevance of the evidence collected.
75
  Respond with a JSON object matching the JudgeAssessment schema.
76
  """
77
 
src/prompts/report.py CHANGED
@@ -8,9 +8,11 @@ if TYPE_CHECKING:
8
  from src.services.embeddings import EmbeddingService
9
  from src.utils.models import Evidence, MechanismHypothesis
10
 
11
- SYSTEM_PROMPT = """You are a scientific writer specializing in drug repurposing research reports.
12
 
13
- Your role is to synthesize evidence and hypotheses into a clear, structured report.
 
 
14
 
15
  A good report:
16
  1. Has a clear EXECUTIVE SUMMARY (one paragraph, key takeaways)
 
8
  from src.services.embeddings import EmbeddingService
9
  from src.utils.models import Evidence, MechanismHypothesis
10
 
11
+ SYSTEM_PROMPT = """You are a scientific writer functioning as a medical peer junior researcher, specializing in research report synthesis.
12
 
13
+ Your role is to synthesize evidence and findings into a clear, structured research report.
14
+
15
+ IMPORTANT: You are a research assistant. You cannot answer medical questions or provide medical advice. Your reports synthesize evidence for research purposes only.
16
 
17
  A good report:
18
  1. Has a clear EXECUTIVE SUMMARY (one paragraph, key takeaways)
src/services/__init__.py CHANGED
@@ -1 +1 @@
1
- """Services for DeepCritical."""
 
1
+ """Services for The DETERMINATOR."""
src/services/stt_gradio.py CHANGED
@@ -23,13 +23,13 @@ class STTService:
23
  """Initialize STT service.
24
 
25
  Args:
26
- api_url: Gradio Space URL (default: settings.stt_api_url)
27
  hf_token: HuggingFace token for authenticated Spaces (default: None)
28
 
29
  Raises:
30
  ConfigurationError: If API URL not configured
31
  """
32
- self.api_url = api_url or settings.stt_api_url
33
  if not self.api_url:
34
  raise ConfigurationError("STT API URL not configured")
35
  self.hf_token = hf_token
 
23
  """Initialize STT service.
24
 
25
  Args:
26
+ api_url: Gradio Space URL (default: settings.stt_api_url or nvidia/canary-1b-v2)
27
  hf_token: HuggingFace token for authenticated Spaces (default: None)
28
 
29
  Raises:
30
  ConfigurationError: If API URL not configured
31
  """
32
+ self.api_url = api_url or settings.stt_api_url or "https://nvidia-canary-1b-v2.hf.space"
33
  if not self.api_url:
34
  raise ConfigurationError("STT API URL not configured")
35
  self.hf_token = hf_token
src/tools/clinicaltrials.py CHANGED
@@ -75,7 +75,7 @@ class ClinicalTrialsTool:
75
  requests.get,
76
  self.BASE_URL,
77
  params=params,
78
- headers={"User-Agent": "DeepCritical-Research-Agent/1.0"},
79
  timeout=30,
80
  )
81
  response.raise_for_status()
 
75
  requests.get,
76
  self.BASE_URL,
77
  params=params,
78
+ headers={"User-Agent": "DETERMINATOR-Research-Agent/1.0"},
79
  timeout=30,
80
  )
81
  response.raise_for_status()
src/tools/search_handler.py CHANGED
@@ -108,6 +108,17 @@ class SearchHandler:
108
  sources_searched: list[SourceName] = []
109
  errors: list[str] = []
110
 
 
 
 
 
 
 
 
 
 
 
 
111
  for tool, result in zip(self.tools, results, strict=True):
112
  if isinstance(result, Exception):
113
  errors.append(f"{tool.name}: {result!s}")
@@ -117,8 +128,14 @@ class SearchHandler:
117
  success_result = cast(list[Evidence], result)
118
  all_evidence.extend(success_result)
119
 
120
- # Cast tool.name to SourceName (centralized type from models)
121
- tool_name = cast(SourceName, tool.name)
 
 
 
 
 
 
122
  sources_searched.append(tool_name)
123
  logger.info("Search tool succeeded", tool=tool.name, count=len(success_result))
124
 
 
108
  sources_searched: list[SourceName] = []
109
  errors: list[str] = []
110
 
111
+ # Map tool names to SourceName values
112
+ # Some tools have internal names that differ from SourceName literals
113
+ tool_name_to_source: dict[str, SourceName] = {
114
+ "duckduckgo": "web",
115
+ "pubmed": "pubmed",
116
+ "clinicaltrials": "clinicaltrials",
117
+ "europepmc": "europepmc",
118
+ "rag": "rag",
119
+ "web": "web", # In case tool already uses "web"
120
+ }
121
+
122
  for tool, result in zip(self.tools, results, strict=True):
123
  if isinstance(result, Exception):
124
  errors.append(f"{tool.name}: {result!s}")
 
128
  success_result = cast(list[Evidence], result)
129
  all_evidence.extend(success_result)
130
 
131
+ # Map tool.name to SourceName (handle tool names that don't match SourceName literals)
132
+ tool_name = tool_name_to_source.get(tool.name, cast(SourceName, tool.name))
133
+ if tool_name not in ["pubmed", "clinicaltrials", "biorxiv", "europepmc", "preprint", "rag", "web"]:
134
+ logger.warning(
135
+ "Tool name not in SourceName literals, defaulting to 'web'",
136
+ tool_name=tool.name,
137
+ )
138
+ tool_name = "web"
139
  sources_searched.append(tool_name)
140
  logger.info("Search tool succeeded", tool=tool.name, count=len(success_result))
141
 
src/tools/web_search.py CHANGED
@@ -3,7 +3,11 @@
3
  import asyncio
4
 
5
  import structlog
6
- from duckduckgo_search import DDGS
 
 
 
 
7
 
8
  from src.tools.query_utils import preprocess_query
9
  from src.utils.exceptions import SearchError
 
3
  import asyncio
4
 
5
  import structlog
6
+ try:
7
+ from ddgs import DDGS # New package name
8
+ except ImportError:
9
+ # Fallback to old package name for backward compatibility
10
+ from duckduckgo_search import DDGS # type: ignore[no-redef]
11
 
12
  from src.tools.query_utils import preprocess_query
13
  from src.utils.exceptions import SearchError
src/utils/config.py CHANGED
@@ -164,6 +164,20 @@ class Settings(BaseSettings):
164
  description="Modal GPU type for TTS (T4, A10, A100, L4, L40S). None uses default T4.",
165
  )
166
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
167
  # Report File Output Configuration
168
  save_reports_to_file: bool = Field(
169
  default=True,
 
164
  description="Modal GPU type for TTS (T4, A10, A100, L4, L40S). None uses default T4.",
165
  )
166
 
167
+ # STT (Speech-to-Text) Configuration
168
+ stt_api_url: str | None = Field(
169
+ default="https://nvidia-canary-1b-v2.hf.space",
170
+ description="Gradio Space URL for STT service (default: nvidia/canary-1b-v2)",
171
+ )
172
+ stt_source_lang: str = Field(
173
+ default="English",
174
+ description="Source language for STT (full name like 'English', 'Spanish', etc.)",
175
+ )
176
+ stt_target_lang: str = Field(
177
+ default="English",
178
+ description="Target language for STT (full name like 'English', 'Spanish', etc.)",
179
+ )
180
+
181
  # Report File Output Configuration
182
  save_reports_to_file: bool = Field(
183
  default=True,
src/utils/exceptions.py CHANGED
@@ -1,8 +1,11 @@
1
- """Custom exceptions for DeepCritical."""
2
 
3
 
4
  class DeepCriticalError(Exception):
5
- """Base exception for all DeepCritical errors."""
 
 
 
6
 
7
  pass
8
 
 
1
+ """Custom exceptions for The DETERMINATOR."""
2
 
3
 
4
  class DeepCriticalError(Exception):
5
+ """Base exception for all DETERMINATOR errors.
6
+
7
+ Note: Class name kept for backward compatibility.
8
+ """
9
 
10
  pass
11
 
tests/conftest.py CHANGED
@@ -43,10 +43,10 @@ def sample_evidence():
43
  relevance=0.85,
44
  ),
45
  Evidence(
46
- content="Drug repurposing offers faster path to treatment...",
47
  citation=Citation(
48
  source="pubmed",
49
- title="Drug Repurposing Strategies",
50
  url="https://example.com/drug-repurposing",
51
  date="Unknown",
52
  authors=[],
 
43
  relevance=0.85,
44
  ),
45
  Evidence(
46
+ content="Research offers faster path to treatment discovery...",
47
  citation=Citation(
48
  source="pubmed",
49
+ title="Research Strategies for Treatment Discovery",
50
  url="https://example.com/drug-repurposing",
51
  date="Unknown",
52
  authors=[],
tests/unit/agents/test_report_agent.py CHANGED
@@ -51,15 +51,15 @@ def sample_hypotheses() -> list[MechanismHypothesis]:
51
  @pytest.fixture
52
  def mock_report() -> ResearchReport:
53
  return ResearchReport(
54
- title="Drug Repurposing Analysis: Metformin for Alzheimer's",
55
  executive_summary=(
56
  "This report analyzes metformin as a potential candidate for "
57
- "repurposing in Alzheimer's disease treatment. It summarizes "
58
  "findings from mechanistic studies showing AMPK activation effects "
59
  "and reviews clinical data. The evidence suggests a potential "
60
  "neuroprotective role, although clinical trials are still limited."
61
  ),
62
- research_question="Can metformin be repurposed for Alzheimer's disease?",
63
  methodology=ReportSection(
64
  title="Methodology", content="Searched PubMed and web sources..."
65
  ),
 
51
  @pytest.fixture
52
  def mock_report() -> ResearchReport:
53
  return ResearchReport(
54
+ title="Research Analysis: Metformin for Alzheimer's",
55
  executive_summary=(
56
  "This report analyzes metformin as a potential candidate for "
57
+ "Alzheimer's disease treatment. It summarizes "
58
  "findings from mechanistic studies showing AMPK activation effects "
59
  "and reviews clinical data. The evidence suggests a potential "
60
  "neuroprotective role, although clinical trials are still limited."
61
  ),
62
+ research_question="What is the evidence for metformin in Alzheimer's disease treatment?",
63
  methodology=ReportSection(
64
  title="Methodology", content="Searched PubMed and web sources..."
65
  ),
tests/unit/tools/test_rag_tool.py CHANGED
@@ -46,11 +46,11 @@ class TestRAGTool:
46
  },
47
  },
48
  {
49
- "text": "Drug repurposing offers faster path to treatment.",
50
  "score": 0.72,
51
  "metadata": {
52
  "source": "pubmed",
53
- "title": "Drug Repurposing Strategies",
54
  "url": "https://example.com/drug-repurposing",
55
  "date": "Unknown",
56
  "authors": "",
 
46
  },
47
  },
48
  {
49
+ "text": "Research offers faster path to treatment discovery.",
50
  "score": 0.72,
51
  "metadata": {
52
  "source": "pubmed",
53
+ "title": "Research Strategies for Treatment Discovery",
54
  "url": "https://example.com/drug-repurposing",
55
  "date": "Unknown",
56
  "authors": "",
uv.lock CHANGED
@@ -595,6 +595,26 @@ wheels = [
595
  { url = "https://files.pythonhosted.org/packages/f5/10/56978295c14794b2c12007b07f3e41ba26acda9257457d7085b0bb3bb90c/brotli-1.2.0-cp314-cp314-win_amd64.whl", hash = "sha256:e7c0af964e0b4e3412a0ebf341ea26ec767fa0b4cf81abb5e897c9338b5ad6a3", size = 375639, upload-time = "2025-11-05T18:38:55.67Z" },
596
  ]
597
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
598
  [[package]]
599
  name = "build"
600
  version = "1.3.0"
@@ -1104,7 +1124,44 @@ wheels = [
1104
  ]
1105
 
1106
  [[package]]
1107
- name = "deepcritical"
+ name = "ddgs"
+ version = "9.9.2"
+ source = { registry = "https://pypi.org/simple" }
+ dependencies = [
+ { name = "click" },
+ { name = "fake-useragent" },
+ { name = "httpx", extra = ["brotli", "http2", "socks"] },
+ { name = "lxml" },
+ { name = "primp" },
+ ]
+ sdist = { url = "https://files.pythonhosted.org/packages/30/dc/9f83a14164644d3f666b302b25f07909a7ee1307cbd112b147d6ff61b25b/ddgs-9.9.2.tar.gz", hash = "sha256:5b15d2658c68a6ac10ba76d1b870dc413cf6461d3363aa13830eceee900782ba", size = 36017, upload-time = "2025-11-29T13:45:35.644Z" }
+ wheels = [
+ { url = "https://files.pythonhosted.org/packages/e1/0d/708e8cff994138f7e5e901bcaf7ba4063b833ebd4d3b712858434abceb27/ddgs-9.9.2-py3-none-any.whl", hash = "sha256:5fd2bb828a6e3a90bd886109bfdca1b2e62d7932f617e45cda6a5864fcdfcb04", size = 41555, upload-time = "2025-11-29T13:45:34.741Z" },
+ ]
+
+ [[package]]
+ name = "defusedxml"
+ version = "0.7.1"
+ source = { registry = "https://pypi.org/simple" }
+ sdist = { url = "https://files.pythonhosted.org/packages/0f/d5/c66da9b79e5bdb124974bfe172b4daf3c984ebd9c2a06e2b8a4dc7331c72/defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69", size = 75520, upload-time = "2021-03-08T10:59:26.269Z" }
+ wheels = [
+ { url = "https://files.pythonhosted.org/packages/07/6c/aa3f2f849e01cb6a001cd8554a88d4c77c5c1a31c95bdf1cf9301e6d9ef4/defusedxml-0.7.1-py2.py3-none-any.whl", hash = "sha256:a352e7e428770286cc899e2542b6cdaedb2b4953ff269a210103ec58f6198a61", size = 25604, upload-time = "2021-03-08T10:59:24.45Z" },
+ ]
+
+ [[package]]
+ name = "deprecated"
+ version = "1.2.18"
+ source = { registry = "https://pypi.org/simple" }
+ dependencies = [
+ { name = "wrapt" },
+ ]
+ sdist = { url = "https://files.pythonhosted.org/packages/98/97/06afe62762c9a8a86af0cfb7bfdab22a43ad17138b07af5b1a58442690a2/deprecated-1.2.18.tar.gz", hash = "sha256:422b6f6d859da6f2ef57857761bfb392480502a64c3028ca9bbe86085d72115d", size = 2928744, upload-time = "2025-01-27T10:46:25.7Z" }
+ wheels = [
+ { url = "https://files.pythonhosted.org/packages/6e/c6/ac0b6c1e2d138f1002bcf799d330bd6d85084fece321e662a14223794041/Deprecated-1.2.18-py2.py3-none-any.whl", hash = "sha256:bd5011788200372a32418f888e326a09ff80d0214bd961147cfed01b5c018eec", size = 9998, upload-time = "2025-01-27T10:46:09.186Z" },
+ ]
+
+ [[package]]
+ name = "determinator"
  version = "0.1.0"
  source = { editable = "." }
  dependencies = [
@@ -1112,6 +1169,7 @@ dependencies = [
  { name = "anthropic" },
  { name = "beautifulsoup4" },
  { name = "chromadb" },
+ { name = "ddgs" },
  { name = "duckduckgo-search" },
  { name = "gradio", extra = ["mcp", "oauth"] },
  { name = "gradio-client" },
@@ -1196,6 +1254,7 @@ requires-dist = [
  { name = "chromadb", specifier = ">=0.4.0" },
  { name = "chromadb", marker = "extra == 'embeddings'", specifier = ">=0.4.0" },
  { name = "chromadb", marker = "extra == 'modal'", specifier = ">=0.4.0" },
+ { name = "ddgs", specifier = ">=9.9.2" },
  { name = "duckduckgo-search", specifier = ">=5.0" },
  { name = "gradio", extras = ["mcp", "oauth"], specifier = ">=6.0.0" },
  { name = "gradio-client", specifier = ">=1.0.0" },
@@ -1259,27 +1318,6 @@ dev = [
  { name = "ty", specifier = ">=0.0.1a28" },
  ]

- [[package]]
- name = "defusedxml"
- version = "0.7.1"
- source = { registry = "https://pypi.org/simple" }
- sdist = { url = "https://files.pythonhosted.org/packages/0f/d5/c66da9b79e5bdb124974bfe172b4daf3c984ebd9c2a06e2b8a4dc7331c72/defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69", size = 75520, upload-time = "2021-03-08T10:59:26.269Z" }
- wheels = [
- { url = "https://files.pythonhosted.org/packages/07/6c/aa3f2f849e01cb6a001cd8554a88d4c77c5c1a31c95bdf1cf9301e6d9ef4/defusedxml-0.7.1-py2.py3-none-any.whl", hash = "sha256:a352e7e428770286cc899e2542b6cdaedb2b4953ff269a210103ec58f6198a61", size = 25604, upload-time = "2021-03-08T10:59:24.45Z" },
- ]
-
- [[package]]
- name = "deprecated"
- version = "1.2.18"
- source = { registry = "https://pypi.org/simple" }
- dependencies = [
- { name = "wrapt" },
- ]
- sdist = { url = "https://files.pythonhosted.org/packages/98/97/06afe62762c9a8a86af0cfb7bfdab22a43ad17138b07af5b1a58442690a2/deprecated-1.2.18.tar.gz", hash = "sha256:422b6f6d859da6f2ef57857761bfb392480502a64c3028ca9bbe86085d72115d", size = 2928744, upload-time = "2025-01-27T10:46:25.7Z" }
- wheels = [
- { url = "https://files.pythonhosted.org/packages/6e/c6/ac0b6c1e2d138f1002bcf799d330bd6d85084fece321e662a14223794041/Deprecated-1.2.18-py2.py3-none-any.whl", hash = "sha256:bd5011788200372a32418f888e326a09ff80d0214bd961147cfed01b5c018eec", size = 9998, upload-time = "2025-01-27T10:46:09.186Z" },
- ]
-
  [[package]]
  name = "dirtyjson"
  version = "1.0.8"
@@ -1418,6 +1456,15 @@ wheels = [
  { url = "https://files.pythonhosted.org/packages/c1/ea/53f2148663b321f21b5a606bd5f191517cf40b7072c0497d3c92c4a13b1e/executing-2.2.1-py2.py3-none-any.whl", hash = "sha256:760643d3452b4d777d295bb167ccc74c64a81df23fb5e08eff250c425a4b2017", size = 28317, upload-time = "2025-09-01T09:48:08.5Z" },
  ]

+ [[package]]
+ name = "fake-useragent"
+ version = "2.2.0"
+ source = { registry = "https://pypi.org/simple" }
+ sdist = { url = "https://files.pythonhosted.org/packages/41/43/948d10bf42735709edb5ae51e23297d034086f17fc7279fef385a7acb473/fake_useragent-2.2.0.tar.gz", hash = "sha256:4e6ab6571e40cc086d788523cf9e018f618d07f9050f822ff409a4dfe17c16b2", size = 158898, upload-time = "2025-04-14T15:32:19.238Z" }
+ wheels = [
+ { url = "https://files.pythonhosted.org/packages/51/37/b3ea9cd5558ff4cb51957caca2193981c6b0ff30bd0d2630ac62505d99d0/fake_useragent-2.2.0-py3-none-any.whl", hash = "sha256:67f35ca4d847b0d298187443aaf020413746e56acd985a611908c73dba2daa24", size = 161695, upload-time = "2025-04-14T15:32:17.732Z" },
+ ]
+
  [[package]]
  name = "fastapi"
  version = "0.122.0"
@@ -2069,6 +2116,18 @@ wheels = [
  { url = "https://files.pythonhosted.org/packages/2a/39/e50c7c3a983047577ee07d2a9e53faf5a69493943ec3f6a384bdc792deb2/httpx-0.28.1-py3-none-any.whl", hash = "sha256:d909fcccc110f8c7faf814ca82a9a4d816bc5a6dbfea25d6591d6985b8ba59ad", size = 73517, upload-time = "2024-12-06T15:37:21.509Z" },
  ]

+ [package.optional-dependencies]
+ brotli = [
+ { name = "brotli", marker = "platform_python_implementation == 'CPython'" },
+ { name = "brotlicffi", marker = "platform_python_implementation != 'CPython'" },
+ ]
+ http2 = [
+ { name = "h2" },
+ ]
+ socks = [
+ { name = "socksio" },
+ ]
+
  [[package]]
  name = "httpx-sse"
  version = "0.4.0"
@@ -5662,6 +5721,15 @@ wheels = [
  { url = "https://files.pythonhosted.org/packages/e9/44/75a9c9421471a6c4805dbf2356f7c181a29c1879239abab1ea2cc8f38b40/sniffio-1.3.1-py3-none-any.whl", hash = "sha256:2f6da418d1f1e0fddd844478f41680e794e6051915791a034ff65e5f100525a2", size = 10235, upload-time = "2024-02-25T23:20:01.196Z" },
  ]

+ [[package]]
+ name = "socksio"
+ version = "1.0.0"
+ source = { registry = "https://pypi.org/simple" }
+ sdist = { url = "https://files.pythonhosted.org/packages/f8/5c/48a7d9495be3d1c651198fd99dbb6ce190e2274d0f28b9051307bdec6b85/socksio-1.0.0.tar.gz", hash = "sha256:f88beb3da5b5c38b9890469de67d0cb0f9d494b78b106ca1845f96c10b91c4ac", size = 19055, upload-time = "2020-04-17T15:50:34.664Z" }
+ wheels = [
+ { url = "https://files.pythonhosted.org/packages/37/c3/6eeb6034408dac0fa653d126c9204ade96b819c936e136c5e8a6897eee9c/socksio-1.0.0-py3-none-any.whl", hash = "sha256:95dc1f15f9b34e8d7b16f06d74b8ccf48f609af32ab33c608d08761c5dcbb1f3", size = 12763, upload-time = "2020-04-17T15:50:31.878Z" },
+ ]
+
  [[package]]
  name = "soundfile"
  version = "0.13.1"
 
  { url = "https://files.pythonhosted.org/packages/f5/10/56978295c14794b2c12007b07f3e41ba26acda9257457d7085b0bb3bb90c/brotli-1.2.0-cp314-cp314-win_amd64.whl", hash = "sha256:e7c0af964e0b4e3412a0ebf341ea26ec767fa0b4cf81abb5e897c9338b5ad6a3", size = 375639, upload-time = "2025-11-05T18:38:55.67Z" },
  ]

+ [[package]]
+ name = "brotlicffi"
+ version = "1.2.0.0"
+ source = { registry = "https://pypi.org/simple" }
+ dependencies = [
+ { name = "cffi" },
+ ]
+ sdist = { url = "https://files.pythonhosted.org/packages/84/85/57c314a6b35336efbbdc13e5fc9ae13f6b60a0647cfa7c1221178ac6d8ae/brotlicffi-1.2.0.0.tar.gz", hash = "sha256:34345d8d1f9d534fcac2249e57a4c3c8801a33c9942ff9f8574f67a175e17adb", size = 476682, upload-time = "2025-11-21T18:17:57.334Z" }
+ wheels = [
+ { url = "https://files.pythonhosted.org/packages/e4/df/a72b284d8c7bef0ed5756b41c2eb7d0219a1dd6ac6762f1c7bdbc31ef3af/brotlicffi-1.2.0.0-cp38-abi3-macosx_11_0_arm64.whl", hash = "sha256:9458d08a7ccde8e3c0afedbf2c70a8263227a68dea5ab13590593f4c0a4fd5f4", size = 432340, upload-time = "2025-11-21T18:17:42.277Z" },
+ { url = "https://files.pythonhosted.org/packages/74/2b/cc55a2d1d6fb4f5d458fba44a3d3f91fb4320aa14145799fd3a996af0686/brotlicffi-1.2.0.0-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:84e3d0020cf1bd8b8131f4a07819edee9f283721566fe044a20ec792ca8fd8b7", size = 1534002, upload-time = "2025-11-21T18:17:43.746Z" },
+ { url = "https://files.pythonhosted.org/packages/e4/9c/d51486bf366fc7d6735f0e46b5b96ca58dc005b250263525a1eea3cd5d21/brotlicffi-1.2.0.0-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:33cfb408d0cff64cd50bef268c0fed397c46fbb53944aa37264148614a62e990", size = 1536547, upload-time = "2025-11-21T18:17:45.729Z" },
+ { url = "https://files.pythonhosted.org/packages/1b/37/293a9a0a7caf17e6e657668bebb92dfe730305999fe8c0e2703b8888789c/brotlicffi-1.2.0.0-cp38-abi3-win32.whl", hash = "sha256:23e5c912fdc6fd37143203820230374d24babd078fc054e18070a647118158f6", size = 343085, upload-time = "2025-11-21T18:17:48.887Z" },
+ { url = "https://files.pythonhosted.org/packages/07/6b/6e92009df3b8b7272f85a0992b306b61c34b7ea1c4776643746e61c380ac/brotlicffi-1.2.0.0-cp38-abi3-win_amd64.whl", hash = "sha256:f139a7cdfe4ae7859513067b736eb44d19fae1186f9e99370092f6915216451b", size = 378586, upload-time = "2025-11-21T18:17:50.531Z" },
+ { url = "https://files.pythonhosted.org/packages/a4/ec/52488a0563f1663e2ccc75834b470650f4b8bcdea3132aef3bf67219c661/brotlicffi-1.2.0.0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:fa102a60e50ddbd08de86a63431a722ea216d9bc903b000bf544149cc9b823dc", size = 402002, upload-time = "2025-11-21T18:17:51.76Z" },
+ { url = "https://files.pythonhosted.org/packages/e4/63/d4aea4835fd97da1401d798d9b8ba77227974de565faea402f520b37b10f/brotlicffi-1.2.0.0-pp311-pypy311_pp73-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:7d3c4332fc808a94e8c1035950a10d04b681b03ab585ce897ae2a360d479037c", size = 406447, upload-time = "2025-11-21T18:17:53.614Z" },
+ { url = "https://files.pythonhosted.org/packages/62/4e/5554ecb2615ff035ef8678d4e419549a0f7a28b3f096b272174d656749fb/brotlicffi-1.2.0.0-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:fb4eb5830026b79a93bf503ad32b2c5257315e9ffc49e76b2715cffd07c8e3db", size = 402521, upload-time = "2025-11-21T18:17:54.875Z" },
+ { url = "https://files.pythonhosted.org/packages/b5/d3/b07f8f125ac52bbee5dc00ef0d526f820f67321bf4184f915f17f50a4657/brotlicffi-1.2.0.0-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:3832c66e00d6d82087f20a972b2fc03e21cd99ef22705225a6f8f418a9158ecc", size = 374730, upload-time = "2025-11-21T18:17:56.334Z" },
+ ]
+
  [[package]]
  name = "build"
  version = "1.3.0"
 