DeepCritical Context
Project Overview
DeepCritical is an AI-native Medical Drug Repurposing Research Agent. Goal: To accelerate the discovery of new uses for existing drugs by intelligently searching biomedical literature (PubMed, ClinicalTrials.gov, bioRxiv), evaluating evidence, and hypothesizing potential applications.
Architecture: The project follows a Vertical Slice Architecture (Search -> Judge -> Orchestrator) and adheres to Strict TDD (Test-Driven Development).
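The Search -> Judge -> Orchestrator slice can be sketched as a minimal loop. All names here (`Evidence`, `search`, `judge`, `orchestrate`) are illustrative stand-ins under assumed interfaces, not the actual code in `src/orchestrator.py`:

```python
from dataclasses import dataclass


@dataclass
class Evidence:
    """One piece of retrieved biomedical evidence."""
    source: str
    text: str


def search(query: str) -> list[Evidence]:
    # Stand-in for the PubMed/ClinicalTrials.gov/bioRxiv search tools.
    return [Evidence("pubmed", f"result for {query}")]


def judge(evidence: list[Evidence]) -> bool:
    # Stand-in for the judge agent: decides if the evidence is sufficient.
    return len(evidence) >= 1


def orchestrate(query: str, max_iterations: int = 10) -> list[Evidence]:
    # The orchestrator loops search -> judge until the judge is satisfied
    # or the iteration budget (cf. MAX_ITERATIONS) is exhausted.
    collected: list[Evidence] = []
    for _ in range(max_iterations):
        collected += search(query)
        if judge(collected):
            break
    return collected
```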
Current Status:
- Phases 1-9: COMPLETE. Foundation, Search, Judge, UI, Orchestrator, Embeddings, Hypothesis, Report, Cleanup.
- Phases 10-11: COMPLETE. ClinicalTrials.gov and bioRxiv integration.
- Phase 12: COMPLETE. MCP server integration (Gradio MCP at `/gradio_api/mcp/`).
- Phase 13: COMPLETE. Modal sandbox for statistical analysis.
Tech Stack & Tooling
- Language: Python 3.11 (pinned)
- Package Manager: `uv` (Rust-based, extremely fast)
- Frameworks: `pydantic`, `pydantic-ai`, `httpx`, `gradio[mcp]`
- Vector DB: `chromadb` with `sentence-transformers` for semantic search
- Code Execution: `modal` for secure sandboxed Python execution
- Testing: `pytest`, `pytest-asyncio`, `respx` (for mocking)
- Quality: `ruff` (linting/formatting), `mypy` (strict type checking), `pre-commit`
Building & Running

| Command | Description |
|---|---|
| `make install` | Install dependencies and pre-commit hooks. |
| `make test` | Run unit tests. |
| `make lint` | Run Ruff linter. |
| `make format` | Run Ruff formatter. |
| `make typecheck` | Run Mypy static type checker. |
| `make check` | The Golden Gate: runs lint, typecheck, and test. Must pass before committing. |
| `make clean` | Clean up cache and artifacts. |
Directory Structure
- `src/`: Source code
  - `utils/`: Shared utilities (`config.py`, `exceptions.py`, `models.py`)
  - `tools/`: Search tools (`pubmed.py`, `clinicaltrials.py`, `biorxiv.py`, `code_execution.py`)
  - `services/`: Services (`embeddings.py`, `statistical_analyzer.py`)
  - `agents/`: Magentic multi-agent mode agents
  - `agent_factory/`: Agent definitions (judges, prompts)
  - `mcp_tools.py`: MCP tool wrappers for Claude Desktop integration
  - `app.py`: Gradio UI with MCP server
- `tests/`: Test suite
  - `unit/`: Isolated unit tests (mocked)
  - `integration/`: Real API tests (marked as slow/integration)
- `docs/`: Documentation and implementation specs
- `examples/`: Working demos for each phase
Key Components
- `src/orchestrator.py` - Main agent loop
- `src/tools/pubmed.py` - PubMed E-utilities search
- `src/tools/clinicaltrials.py` - ClinicalTrials.gov API
- `src/tools/biorxiv.py` - bioRxiv/medRxiv preprint search
- `src/tools/code_execution.py` - Modal sandbox execution
- `src/services/statistical_analyzer.py` - Statistical analysis via Modal
- `src/mcp_tools.py` - MCP tool wrappers
- `src/app.py` - Gradio UI (HuggingFace Spaces) with MCP server
Configuration

Settings are loaded via `pydantic-settings` from `.env`:
- `LLM_PROVIDER`: "openai" or "anthropic"
- `OPENAI_API_KEY` / `ANTHROPIC_API_KEY`: LLM keys
- `NCBI_API_KEY`: Optional, for higher PubMed rate limits
- `MODAL_TOKEN_ID` / `MODAL_TOKEN_SECRET`: For Modal sandbox (optional)
- `MAX_ITERATIONS`: 1-50, default 10
- `LOG_LEVEL`: DEBUG, INFO, WARNING, or ERROR
Development Conventions
- Strict TDD: Write failing tests in `tests/unit/` before implementing logic in `src/`.
- Type Safety: All code must pass `mypy --strict`. Use Pydantic models for data exchange.
- Linting: Zero tolerance for Ruff errors.
- Mocking: Use `respx` or `unittest.mock` for all external API calls in unit tests.
- Vertical Slices: Implement features end-to-end rather than layer-by-layer.
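A minimal illustration of the mocking convention using stdlib `unittest.mock` (the project also uses `respx` for `httpx`-level mocking). `search_pubmed` here is a toy stand-in, not the real tool in `src/tools/pubmed.py`:

```python
from unittest.mock import MagicMock


def search_pubmed(client, term: str) -> list[str]:
    """Toy wrapper around the E-utilities esearch endpoint."""
    resp = client.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "pubmed", "term": term},
    )
    return resp.json()["esearchresult"]["idlist"]


def test_search_pubmed_is_mocked() -> None:
    # No network traffic: the client is a mock with a canned JSON payload.
    client = MagicMock()  # stands in for an httpx.Client
    client.get.return_value.json.return_value = {
        "esearchresult": {"idlist": ["12345"]}
    }
    assert search_pubmed(client, "aspirin repurposing") == ["12345"]
    client.get.assert_called_once()


test_search_pubmed_is_mocked()
```

The TDD flow is: write a test like this (failing) in `tests/unit/`, then implement the real tool in `src/` until it passes.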
Git Workflow
- `main`: Production-ready (GitHub)
- `dev`: Development integration (GitHub)
- Remote `origin`: GitHub (source of truth for PRs/code review)
- Remote `huggingface-upstream`: HuggingFace Spaces (deployment target)

HuggingFace Spaces Collaboration:
- Each contributor should use their own dev branch: `yourname-dev` (e.g., `vcms-dev`, `mario-dev`)
- DO NOT push directly to `main` or `dev` on HuggingFace - these can be overwritten easily
- GitHub is the source of truth; HuggingFace is for deployment/demo
- Consider using git hooks to prevent accidental pushes to protected branches
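One way to sketch the guard logic for such a hook (a hypothetical sketch; adjust remote and branch names to your setup). A real `pre-push` hook receives the remote name as `$1` and the pushed refs on stdin, so this predicate would be called once per ref line:

```shell
# is_blocked <remote> <remote_ref>
# Succeeds (exit 0) when the push should be rejected: i.e. a push of
# main or dev to the HuggingFace Spaces remote.
is_blocked() {
    [ "$1" = "huggingface-upstream" ] || return 1
    case "$2" in
        refs/heads/main|refs/heads/dev) return 0 ;;
        *) return 1 ;;
    esac
}
```

Inside `.git/hooks/pre-push`, call `is_blocked "$1" "$remote_ref"` for each stdin line and `exit 1` with a message when it succeeds; pushes of personal `yourname-dev` branches pass through.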