torch==2.6.0
torchvision==0.21.0
torchaudio==2.6.0
opencv-python==4.11.0.86
diffusers==0.34.0
tokenizers==0.21.4
accelerate==1.10.0
tqdm==4.67.1
imageio==2.37.0
easydict==1.13
ftfy==6.3.1
dashscope==1.24.1
imageio-ffmpeg==0.6.0
numpy==1.26.4
lightning==2.5.2
xfuser==0.4.4
yunchang==0.6.3.post1
moviepy==2.1.2
omegaconf==2.3.0
decord==0.6.0
ffmpeg-python==0.2.0
librosa==0.11.0
audio-separator==0.30.2
onnxruntime-gpu==1.22.0
gradio>=5.0.0,<5.1.0
insightface==0.7.3
transformers==4.52.0
huggingface_hub
spaces
ninja
# flash_attn precompiled wheel (torch 2.6 + CUDA 12 + Python 3.10)
# Reference: https://huggingface.co/spaces/fffiloni/Meigen-MultiTalk/blob/main/requirements.txt
https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.4.post1/flash_attn-2.7.4.post1+cu12torch2.6cxx11abiFALSE-cp310-cp310-linux_x86_64.whl