Turn black-and-white anime sketches into colored art using smart point matching.
The paper introduces an AI model that accurately colors line art using reference images and precise point control, enabling realistic manga/anime colorization.
https://arxiv.org/abs/2501.08332
Original Problem 🎯:
→ Existing line art colorization methods struggle with semantic mismatches between reference images and line art, requiring highly similar references and lacking precise control over color details.
-----
Solution in this Paper 🔧:
→ MangaNinja builds on diffusion models with a dual-branch architecture that learns correspondences between the reference image and the line art.
→ A patch shuffling module splits the reference image into small patches, forcing the model to rely on local matching rather than global layout (see the sketch after this list).
→ A point-driven control scheme powered by PointNet enables detailed color control through user-defined points.
→ The model is trained on anime video frames, using one frame as the reference and the line art extracted from another frame of the same clip as the target.
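
To make the patch shuffling idea concrete, here is a minimal PyTorch sketch of what such a step could look like during training. The patch size, tensor layout, and exact position in the pipeline are assumptions for illustration, not details taken from the paper.

```python
import torch

def patch_shuffle(ref: torch.Tensor, patch_size: int = 64) -> torch.Tensor:
    """Split a reference image into non-overlapping patches and shuffle them.

    ref: (B, C, H, W) tensor; H and W are assumed divisible by patch_size.
    Breaking the global layout pushes the model to match local reference
    patches to line art regions instead of copying the overall composition.
    """
    b, c, h, w = ref.shape
    ph, pw = h // patch_size, w // patch_size

    # (B, C, H, W) -> (B, ph*pw, C, patch_size, patch_size)
    patches = ref.reshape(b, c, ph, patch_size, pw, patch_size)
    patches = patches.permute(0, 2, 4, 1, 3, 5).reshape(b, ph * pw, c, patch_size, patch_size)

    # Random permutation of patches, independently per sample
    idx = torch.argsort(torch.rand(b, ph * pw, device=ref.device), dim=1)
    shuffled = torch.gather(
        patches, 1,
        idx[:, :, None, None, None].expand(-1, -1, c, patch_size, patch_size),
    )

    # Reassemble the shuffled patches into an image of the original size
    out = shuffled.reshape(b, ph, pw, c, patch_size, patch_size)
    out = out.permute(0, 3, 1, 4, 2, 5).reshape(b, c, h, w)
    return out
```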
-----
Key Insights 🔍:
→ Patch shuffling pushes the model to learn implicit matching by breaking global patterns
→ Point control only works effectively when the model understands local semantics
→ Video frame pairs provide natural semantic correspondences for training (see the sketch after this list)
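
A rough sketch of how such reference/line-art training pairs could be assembled from a single clip. `extract_line_art` is a hypothetical stand-in for whatever sketch extractor the authors use, and the frame-sampling gap is likewise an assumption, not a detail from the paper.

```python
import random
from typing import Callable, List, Tuple
import torch

def make_training_pair(
    frames: List[torch.Tensor],  # frames of one anime clip, each (C, H, W)
    extract_line_art: Callable[[torch.Tensor], torch.Tensor],  # hypothetical extractor
    min_gap: int = 5,            # assumed minimum distance between sampled frames
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
    """Sample a (reference, line art, color target) triplet from one clip.

    Two frames of the same clip share characters and palettes, so the colored
    reference frame naturally corresponds to the line art of the other frame.
    """
    i = random.randrange(0, len(frames) - min_gap)
    j = random.randrange(i + min_gap, len(frames))
    reference = frames[i]                # colored reference frame
    target = frames[j]                   # ground-truth colors for the loss
    line_art = extract_line_art(target)  # conditioning input: line art of the target
    return reference, line_art, target
```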
-----
Results 📊:
→ Outperforms existing methods with DINO score of 69.91 and CLIP score of 90.02
→ Achieves 21.34 dB PSNR and 0.972 MS-SSIM on image quality metrics
→ Shows superior performance in handling extreme poses, shadows, and multi-character colorization