Tags: Image-to-Video, Diffusers, Safetensors, lora, video-generation, pixel-art, sprite-animation, game-development, wan, comfyui
Instructions to use styly-agents/Wan2-2-pixel-animate with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use styly-agents/Wan2-2-pixel-animate with Diffusers:
```
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Switch device_map to "mps" for Apple devices.
pipe = DiffusionPipeline.from_pretrained(
    "Wan-AI/Wan2.2-I2V-14B-480P",
    dtype=torch.bfloat16,
    device_map="cuda",
)
pipe.load_lora_weights("styly-agents/Wan2-2-pixel-animate")

prompt = "A man with short gray hair plays a red electric guitar."
input_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)

# The pipeline returns a batch of videos; take the frames of the first one.
output = pipe(image=input_image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```

- Notebooks
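The result of `pipe(...)` is a list of frames, and `export_to_video` accepts an `fps` argument that sets playback speed, so the exported clip's length is simply frame count divided by frame rate. A minimal sketch of that arithmetic, assuming 81 frames at 16 fps (common Wan2.2 I2V settings, not values confirmed by this page):

```python
def clip_duration_seconds(num_frames: int, fps: int) -> float:
    """Length of the exported clip in seconds (frames / frame rate)."""
    return num_frames / fps

# Assumed Wan2.2 defaults: 81 frames played back at 16 fps.
print(clip_duration_seconds(81, 16))  # → 5.0625
```

If your generation settings differ, pass the matching rate explicitly, e.g. `export_to_video(output, "output.mp4", fps=16)`, so playback speed matches the intended motion.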
- Google Colab
- Kaggle
- Local Apps
- Draw Things
Community discussions:
- #3 Running using code? (opened 29 days ago by andysg2211)
- #2 Do you have any plans to open-source the training dataset? (opened 3 months ago by hongweiyi)
- #1 Documentation missing mysterious loras, 1 reply (opened 4 months ago by positiveelevation)