Complete Guide to Wan VACE Video Generation in ComfyUI
Getting Started with ComfyUI Updates
Before installing the Wan VACE model, ensure your ComfyUI is properly updated:
First, open the Manager and click "Update All." If the update doesn't complete, navigate to your ComfyUI installation folder, open the update folder, and run the update_comfyui.bat file. After restarting, run "Update All" again to ensure all custom nodes are current.
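If you run a git-cloned install and prefer the command line, a short script can do the same update. The path below is an assumption; adjust it to your install, and note that the portable Windows build should use the .bat file instead:

```python
import subprocess
from pathlib import Path

# Assumed install location; adjust to wherever your ComfyUI lives.
COMFYUI_DIR = Path.home() / "ComfyUI"

# Pull the latest ComfyUI code (equivalent to running the update script).
subprocess.run(["git", "pull"], cwd=COMFYUI_DIR, check=True)

# Reinstall requirements so any new dependencies are picked up.
subprocess.run(
    ["python", "-m", "pip", "install", "-r", str(COMFYUI_DIR / "requirements.txt")],
    check=True,
)
```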
Essential Node Installation
You'll need the GGUF loader node (the ComfyUI-GGUF custom node pack) for all workflows. If you don't have it installed:
- Open the manager
- Go to custom nodes manager
- Search for "GGUF"
- Install and restart ComfyUI
For video-to-video workflows, you'll also need the ControlNet auxiliary preprocessors pack (comfyui_controlnet_aux), which you can find and install the same way.
Model Downloads and Placement
Choose the right model size based on your hardware:
- Q3: Smallest version for limited VRAM
- Q4: Balanced option for most users
- Q8: Higher quality with greater VRAM requirements
- FP16: Best quality for high-end graphics cards
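If you're unsure which quant fits your card, checking available VRAM can point you at a starting tier. This is a rough sketch with assumed thresholds, not official requirements:

```python
import torch

def suggest_quant() -> str:
    """Rough rule of thumb mapping GPU VRAM to a GGUF quant level.
    The thresholds here are assumptions, not official requirements."""
    if not torch.cuda.is_available():
        return "Q3 (or consider a cloud platform)"
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    if vram_gb >= 24:
        return "FP16"
    if vram_gb >= 16:
        return "Q8"
    if vram_gb >= 12:
        return "Q4"
    return "Q3"

print(suggest_quant())
```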
Place the main model in the diffusion models folder. You'll also need:
- CLIP text encoder model (the fp8 scaled version) in the text encoders folder
- VAE model in the VAE folder
After downloading all models, go to "Edit" and select "Refresh Node Definitions" so ComfyUI can detect them.
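To double-check placement before refreshing, a short script can confirm each file landed in the right folder. The filenames below are examples; substitute the exact files you downloaded:

```python
from pathlib import Path

# Assumed ComfyUI model root; adjust to your install.
MODELS = Path("ComfyUI/models")

# Example filenames -- substitute whatever you actually downloaded.
expected = {
    "diffusion_models": "Wan2.1-VACE-14B-Q4_K_M.gguf",
    "text_encoders": "umt5_xxl_fp8_e4m3fn_scaled.safetensors",
    "vae": "wan_2.1_vae.safetensors",
}

for folder, filename in expected.items():
    path = MODELS / folder / filename
    status = "OK" if path.exists() else "MISSING"
    print(f"{status}: {path}")
```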
Text-to-Video Workflow Setup
The basic workflow includes:
- Positive and negative prompt nodes
- WanVaceToVideo node
- KSampler node
- TrimVideoLatent node
- Create Video and Save Video nodes
Configure your settings carefully:
- Keep dimensions under 1280 pixels
- Use multiples of 32 for width and height
- Limit videos to 5 seconds maximum
- Calculate frames using: (seconds × 16 fps) + 1
For a 3-second video: (3 × 16) + 1 = 49 frames
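These rules are easy to script. A minimal sketch of the frame and dimension math:

```python
def vace_frames(seconds: float, fps: int = 16) -> int:
    """Frame count for the video length field: (seconds * fps) + 1."""
    return int(seconds * fps) + 1

def snap_dimension(pixels: int, step: int = 32, cap: int = 1280) -> int:
    """Round a dimension down to a multiple of 32 and cap it at 1280."""
    return min(pixels - pixels % step, cap)

print(vace_frames(3))        # 49 frames for a 3-second clip
print(vace_frames(5))        # 81 frames at the recommended 5-second maximum
print(snap_dimension(1920))  # 1280 (capped)
print(snap_dimension(850))   # 832 (nearest lower multiple of 32)
```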
Image-to-Video Conversion
Converting the text-to-video workflow is simple:
- Add a load image node
- Connect the image output to the reference image input
- Adjust dimensions to match your image's aspect ratio
- Create prompts that only animate visible elements
This prevents the AI from inventing elements, such as hands, that weren't visible in the original image.
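To get dimensions that match your source image while respecting the multiples-of-32 rule, a helper like this works (using Pillow; the input filename is hypothetical):

```python
from PIL import Image

def fit_to_multiple_of_32(path: str, target_width: int = 832) -> tuple[int, int]:
    """Compute width/height matching the source image's aspect ratio,
    rounded to multiples of 32 as the sampler expects."""
    with Image.open(path) as img:
        aspect = img.height / img.width
    width = target_width - target_width % 32
    height = round(width * aspect / 32) * 32
    return width, height

# Hypothetical input file.
print(fit_to_multiple_of_32("reference.png"))
```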

Video-to-Video Motion Control
This workflow uses motion from a reference video:
- Load your control video
- Use preprocessor nodes (Canny, depth, or pose)
- Connect to the control video input
- The AI will follow the reference video's motion patterns
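The preprocessor nodes handle this inside ComfyUI, but for intuition, here is roughly what a Canny pass over each frame of the control video does. This is a standalone OpenCV sketch with a hypothetical input file, not the node's actual code:

```python
import cv2

# Hypothetical input clip; the aux preprocessor nodes do this inside ComfyUI.
cap = cv2.VideoCapture("control_video.mp4")
edge_frames = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Canny thresholds are a judgment call; 100/200 is a common starting point.
    edge_frames.append(cv2.Canny(gray, 100, 200))

cap.release()
print(f"Extracted edge maps for {len(edge_frames)} frames")
```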
Speed Optimization with LoRA
To significantly reduce generation time:
- Install rgthree's Power Lora Loader node (from the rgthree-comfy pack)
- Download the speed optimization LoRA
- Place it in the LoRAs folder
- Use a strength setting of 0.25
Adjust these sampling settings:
- Steps: 4-6 (instead of standard 20)
- CFG: 6
- Sampler: Euler Ancestral with the Beta scheduler
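If you queue generations through the ComfyUI API rather than the UI, these settings live in the KSampler node's inputs. A sketch that patches an exported API-format workflow (the filenames are hypothetical):

```python
import json

# Hypothetical workflow file exported via "Save (API Format)" in ComfyUI.
with open("wan_vace_t2v_api.json") as f:
    workflow = json.load(f)

# Apply the speed-LoRA sampling settings to every KSampler node.
for node in workflow.values():
    if node.get("class_type") == "KSampler":
        node["inputs"].update({
            "steps": 4,          # 4-6 steps instead of the standard 20
            "cfg": 6,
            "sampler_name": "euler_ancestral",
            "scheduler": "beta",
        })

with open("wan_vace_t2v_api_fast.json", "w") as f:
    json.dump(workflow, f, indent=2)
```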
Color Correction for LoRA Results
LoRA can alter colors and contrast. Fix this with:
- Easy Use node pack (install it from the Manager)
- Color Match node placed after the VAE Decode node
- Use the original image as the reference
- Apply the correction to all generated frames
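Under the hood, color matching amounts to aligning each channel's statistics with the reference. A simplified NumPy sketch of the idea (the actual node may use a more sophisticated transfer):

```python
import numpy as np

def match_color(frame: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Shift each RGB channel of `frame` toward the reference's mean/std.
    A simplified stand-in for what a color match node does internally."""
    frame = frame.astype(np.float32)
    reference = reference.astype(np.float32)
    out = np.empty_like(frame)
    for c in range(3):
        f_mean, f_std = frame[..., c].mean(), frame[..., c].std()
        r_mean, r_std = reference[..., c].mean(), reference[..., c].std()
        out[..., c] = (frame[..., c] - f_mean) / (f_std + 1e-6) * r_std + r_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```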
Quality Comparison and Expectations
While the Wan VACE model produces impressive results for a free option, paid services like Kling AI still offer superior quality. However, the gap is narrowing, and the open-source nature allows for customization and local processing.
Performance Tips
Generation times vary significantly:
- RTX 4090: 6-7 minutes for standard settings
- Smaller models generate faster but with reduced quality
- Cloud platforms like RunningHub offer alternative processing options
- HD resolution can take 20+ minutes per video