## Use from the Diffusers library
```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline

# Switch "cuda" to "mps" on Apple-silicon devices
pipe = DiffusionPipeline.from_pretrained(
    "Tianyi1229/MindCine", torch_dtype=torch.bfloat16
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
image.save("output.png")
```

# MindCine

This repository contains the pre-trained weights and ground-truth data required to reproduce the results of our framework. The content is organized into three primary components: the generation backbone, the ground-truth data for the semantic and perception branches, and the EEG foundation model.

## 📂 File Structure & Description

The repository is structured as follows:

```
.
├── Tune-A-Video/      # Pre-trained weights for the generation model
├── data/              # Ground-truth data for the semantic & perception branches
└── LLM_pretrained/    # EEG foundation model
```
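Besides loading through the Diffusers pipeline above, the individual folders can be fetched directly from the Hub. The sketch below uses `huggingface_hub.snapshot_download` with `allow_patterns` to pull only the folders listed in the tree; the `patterns_for` helper is illustrative and not part of this repository.

```python
# Folders published in this repository (see the tree above).
SUBDIRS = ["Tune-A-Video", "data", "LLM_pretrained"]

def patterns_for(subdirs):
    """Build glob patterns that match every file under each folder."""
    return [f"{d}/*" for d in subdirs]

if __name__ == "__main__":
    # huggingface_hub is installed as a dependency of diffusers/transformers.
    from huggingface_hub import snapshot_download

    # Download only the listed folders into the local HF cache (needs network).
    local_dir = snapshot_download(
        repo_id="Tianyi1229/MindCine",
        allow_patterns=patterns_for(SUBDIRS),
    )
    print("Snapshot stored at:", local_dir)
```

Restricting the download with `allow_patterns` avoids pulling the full snapshot when only one branch's data is needed.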