# OpenLipSync

This is a small repository containing all the files required to run inference for LatentSync 1.5.
## Installation

- Clone the repo.
- On Debian-based systems, run:

  ```bash
  bash debian_setup.sh
  ```

  This sets up both local and Modal (remote) inference.
- For remote inference with Modal, you must first create the volume by running:

  ```bash
  uv run modal run scripts/modal_download_extras.py
  uv run modal run scripts/modal_download_models.py
  ```
## Running Inference

### Local Inference

Modify the inference.py file at the root of the directory (set the paths to your video and audio files). Then run:

```bash
uv run inference.py
```
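As a rough guide, the edits to inference.py usually amount to pointing two path variables at your own files. The variable names below are assumptions, so check what inference.py actually defines:

```python
# Hypothetical sketch of the values to edit in inference.py.
# The variable names are assumptions -- open inference.py and adjust
# whatever path variables it actually defines.
video_path = "assets/demo_video.mp4"  # input video to be lip-synced (assumed name)
audio_path = "assets/demo_audio.wav"  # driving audio track (assumed name)
```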
### Remote Inference

Modify the modal_lipsync_inference.py file at the root of the directory (set the paths to your video and audio files). Then run:

```bash
uv run modal run modal_lipsync_inference.py
```
### Remote Inference with FastAPI Endpoints

Run:

```bash
uv run modal run modal_lipsync_serve.py
```
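Once the app is deployed, Modal prints its public URL. A request against it could look like the sketch below; the URL, route, and form-field names are all placeholders, so check the routes actually defined in modal_lipsync_serve.py:

```shell
# Every value below is an assumption: substitute the URL that
# `uv run modal run modal_lipsync_serve.py` prints, and the real
# route and field names from modal_lipsync_serve.py.
curl -X POST "https://your-workspace--lipsync.modal.run/lipsync" \
  -F "video=@input.mp4" \
  -F "audio=@input.wav" \
  -o synced_output.mp4
```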
## TODO

- Add MuseTalk checkpoints
- Add LatentSync16 checkpoints