```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained("Tianyi1229/MindCine", dtype=torch.bfloat16, device_map="cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

Project Name: MindCine
This repository contains the pre-trained weights and ground truth data required to reproduce the results of our framework. The content is organized into two primary components: the generation backbone and the training data for specific branches.
File Structure & Description
The repository is structured as follows:
```
.
├── Tune-A-Video/    # Pre-trained weights for the generation model
├── data/            # Ground truth data for the semantic & perception branches
└── LLM_pretrained/  # EEG foundation model
```
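After downloading, the layout above can be verified with a quick sanity check. The snippet below is a minimal sketch: the local path `MindCine` is an assumption and should point at wherever the repository was actually cloned.

```python
from pathlib import Path

# Hypothetical local path to the cloned repository (assumption; adjust as needed).
REPO_ROOT = Path("MindCine")

# Top-level directories described in the structure above.
EXPECTED_DIRS = ["Tune-A-Video", "data", "LLM_pretrained"]

def missing_components(root: Path) -> list[str]:
    """Return the expected top-level directories that are absent under root."""
    return [name for name in EXPECTED_DIRS if not (root / name).is_dir()]

if __name__ == "__main__":
    missing = missing_components(REPO_ROOT)
    if missing:
        print(f"Missing components: {missing}")
    else:
        print("All expected components present.")
```

Running this before training or inference catches an incomplete download early, instead of failing midway through weight loading.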