p-image-edit-loras
Collection • 8 items
Train LoRAs at https://replicate.com/prunaai/p-image-edit-trainer. Run them for inference at https://replicate.com/prunaai/p-image-edit-lora.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# "fill-in-base-model" is a placeholder: use the base model this LoRA was trained on.
# Switch device_map to "mps" on Apple silicon.
pipe = DiffusionPipeline.from_pretrained(
    "fill-in-base-model", dtype=torch.bfloat16, device_map="cuda"
)
pipe.load_lora_weights("PrunaAI/p-image-edit-dotted-illustration-lora")

prompt = "dotted illustration"
input_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png"
)
image = pipe(image=input_image, prompt=prompt).images[0]
image.save("dotted_illustration.png")
This is an image-editing LoRA: provide an input image and an instruction describing the edit. The trigger word is dotted illustration; include it in your instructions for best results. The examples below show input → output for different instructions.
Prefix your edit instruction with the trigger word, for example:
dotted illustration <your edit instruction here>
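Since every instruction should start with the trigger word, a small (hypothetical) helper can keep prompts consistent; the function name and behavior are illustrative, not part of the model's API:

```python
TRIGGER = "dotted illustration"

def build_prompt(instruction: str, trigger: str = TRIGGER) -> str:
    """Prefix an edit instruction with the LoRA trigger word (idempotent)."""
    instruction = instruction.strip()
    if instruction.lower().startswith(trigger):
        return instruction
    return f"{trigger} {instruction}"
```

For example, build_prompt("turn the cat into a line drawing") yields "dotted illustration turn the cat into a line drawing", and prompts that already carry the trigger word pass through unchanged.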
Gallery: five example edits, each generated with the prompt "dotted illustration" plus an edit instruction.
The gallery was generated with lora_scale=1.0. You can tune this when running inference (e.g. lower for a subtler effect, higher for a stronger style).
import replicate

output = replicate.run(
    "prunaai/p-image-edit-lora:17651bd22e8c151cdb13a97b0f8554dce1e7238cd0a18cf90bc237ac5f0bc067",
    input={
        "images": ["https://example.com/input.png"],
        "prompt": "dotted illustration <your edit instruction here>",
        "lora_weights": "https://huggingface.co/davidberenstein1957/p-image-edit-dotted-illustration-lora/resolve/main/weights.safetensors",
        "lora_scale": 1.0,
        "hf_api_token": "your-hf-token",
    },
)