---
title: LLM Data Analyzer
emoji: 📊
colorFrom: blue
colorTo: indigo
sdk: docker
sdk_version: latest
app_file: app.py
pinned: false
---
# 📊 LLM Data Analyzer
An AI-powered tool for analyzing tabular data and chatting about the results with a Llama 2 assistant.
## Features
- **📤 Upload & Analyze**: Upload CSV or Excel files and get instant analysis
- **💬 Chat**: Have conversations with the Llama 2 AI assistant
- **📊 Data Statistics**: View comprehensive data summaries and insights
- **🚀 Fast**: Runs on the free Hugging Face CPU tier
## How to Use
1. **Upload Data** - Start by uploading a CSV or Excel file
2. **Preview** - Review your data and statistics
3. **Ask Questions** - Get AI-powered analysis and insights
4. **Chat** - Have follow-up conversations with the AI
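If you're curious what that loop looks like in code, here is a minimal sketch of the upload, preview, and ask steps, assuming only Streamlit and pandas. This is not the actual `app.py`; the `answer_question` helper is a hypothetical stand-in for the app's model call:

```python
# Minimal sketch of the upload -> preview -> ask flow.
import pandas as pd
import streamlit as st


def answer_question(question: str, df: pd.DataFrame) -> str:
    # Hypothetical helper: the real app would send a prompt like this
    # to the LLM; here we just build and return it.
    return (
        f"Columns: {', '.join(map(str, df.columns))}\n"
        f"Summary:\n{df.describe().to_string()}\n"
        f"Question: {question}"
    )


uploaded = st.file_uploader("Upload a CSV or Excel file", type=["csv", "xlsx"])
if uploaded is not None:
    # Choose the pandas reader based on the file extension
    if uploaded.name.endswith(".csv"):
        df = pd.read_csv(uploaded)
    else:
        df = pd.read_excel(uploaded)

    st.dataframe(df.head())   # preview the first rows
    st.write(df.describe())   # summary statistics

    question = st.text_input("Ask a question about your data")
    if question:
        st.write(answer_question(question, df))
```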
## Technology Stack
- **Model**: Llama 2 7B (quantized to 4-bit)
- **Framework**: Streamlit
- **Inference Engine**: Llama.cpp
- **Hosting**: Hugging Face Spaces
- **Language**: Python 3.10+
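As a rough illustration of the inference side, a 4-bit GGUF build of Llama 2 7B can be loaded through the `llama-cpp-python` bindings. This is a minimal sketch, and the model filename is illustrative:

```python
from llama_cpp import Llama

# Load a 4-bit quantized Llama 2 7B in GGUF format. The path is
# illustrative; n_ctx matches the 2048-token context window below.
llm = Llama(model_path="llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)

out = llm("Q: Summarize this dataset in one sentence. A:", max_tokens=128)
print(out["choices"][0]["text"])
```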
## Performance
| Metric | Value |
|--------|-------|
| Speed | ~5-10 tokens/second (free CPU) |
| Model Size | 4GB (quantized) |
| Context Window | 2048 tokens |
| First Load | ~30 seconds (model download) |
| Subsequent Responses | ~5-15 seconds |
| Hardware | Free Hugging Face CPU |
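The throughput figures are easy to sanity-check yourself. A quick timing sketch, reusing the `llm` object from the loading example above (the number you see will depend on your hardware):

```python
import time

prompt = "Q: What is a correlation matrix? A:"
start = time.perf_counter()
out = llm(prompt, max_tokens=64)
elapsed = time.perf_counter() - start

# completion_tokens is reported in the OpenAI-style usage block
tokens = out["usage"]["completion_tokens"]
print(f"{tokens / elapsed:.1f} tokens/second")
```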
## Local Development (Faster)
For faster local development with GPU acceleration on an Apple Silicon Mac:
```bash
# Clone the repository
git clone https://github.com/Arif-Badhon/LLM-Data-Analyzer
cd LLM-Data-Analyzer
# Switch to huggingface-deployment branch
git checkout huggingface-deployment
# Install dependencies
pip install -r requirements.txt
# Run with MLX (Apple Silicon GPU - ~70 tokens/second)
streamlit run app.py
```
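Under the hood, MLX inference can be as simple as the `mlx-lm` convenience API. A minimal sketch, assuming the `mlx-lm` package and an MLX-converted Llama 2 checkpoint (the repo id is illustrative):

```python
from mlx_lm import load, generate

# Load an MLX-converted model; inference runs on the Apple Silicon GPU.
model, tokenizer = load("mlx-community/Llama-2-7b-chat-mlx")

text = generate(model, tokenizer,
                prompt="Summarize: sales rose 12% in Q3.",
                max_tokens=64)
print(text)
```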
## Deployment Options
### Option 1: Hugging Face Space (Free)
- CPU-based inference
- Speed: 5-10 tokens/second
- Cost: Free
### Option 2: Local with MLX (Fastest)
- GPU-accelerated on Apple Silicon
- Speed: 70+ tokens/second
- Cost: Free (uses your Mac)
### Option 3: Hugging Face PRO (Fast)
- GPU-accelerated inference
- Speed: 50+ tokens/second
- Cost: $9/month
## Getting Started
### Quick Start (3 minutes)
```bash
# 1. Install Python 3.10+
# 2. Clone repo
git clone https://github.com/Arif-Badhon/LLM-Data-Analyzer
cd LLM-Data-Analyzer
# 3. Install dependencies
pip install -r requirements.txt
# 4. Run Streamlit app
streamlit run app.py
```
### With Docker (Local Development)
```bash
# Make sure Docker Desktop is running
docker-compose up --build
# Access at http://localhost:8501
```
## Troubleshooting
### "Model download failed"
- Check your internet connection
- Spaces need internet access to download models from the Hugging Face Hub
- Wait a moment, then refresh the page
### "App takes too long to load"
- Normal on first request (10-30 seconds)
- Model is being downloaded and cached
- Subsequent requests are much faster
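The slow first load is the one-time model download: `huggingface_hub` caches the file locally, so later loads hit the cache and return immediately. A minimal sketch (the repo id and filename are illustrative):

```python
from huggingface_hub import hf_hub_download

# The first call downloads ~4 GB and caches it; subsequent calls
# return the cached path without re-downloading.
model_path = hf_hub_download(
    repo_id="TheBloke/Llama-2-7B-Chat-GGUF",
    filename="llama-2-7b-chat.Q4_K_M.gguf",
)
print(model_path)
```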
### "Out of memory"
- The free CPU tier has limited RAM
- Unlikely with the quantized 4GB model
- If it happens, upgrade to Hugging Face PRO
### "Slow responses"
- The free CPU tier is slower than a GPU
- Expected: 5-10 tokens/second
- For faster responses, run locally with MLX (~70 tokens/second) or upgrade your Hugging Face tier
## Technologies Used
- **Python** - Core language
- **Streamlit** - Web UI framework
- **Llama 2** - Large language model
- **Llama.cpp** - CPU inference
- **MLX** - Apple Silicon GPU inference
- **Pandas** - Data processing
- **Docker** - Containerization
- **Hugging Face Hub** - Model hosting
## License
MIT License
## Author
**Arif Badhon**
## Support
If you encounter any issues:
1. Check the Troubleshooting section above
2. Review the Hugging Face Spaces documentation
3. Open an issue on GitHub
---
**Happy analyzing! 🚀**