
[Submitted on 25 Nov 2025]

Title: Image Diffusion Models Exhibit Emergent Temporal Propagation in Videos
Authors: Youngseo Kim, Dohyun Kim, Geohee Han, Paul Hongsuck Seo

Abstract: Image diffusion models, though originally developed for image generation, implicitly capture rich semantic structures that enable various recognition and localization tasks beyond synthesis. In this work, we investigate how their self-attention maps can be reinterpreted as semantic label propagation kernels, providing robust pixel-level correspondences between relevant image regions. Extending this mechanism across frames yields a temporal propagation kernel that enables zero-shot object tracking via segmentation in videos. We further demonstrate the effectiveness of test-time optimization strategies (DDIM inversion, textual inversion, and adaptive head weighting) in adapting diffusion features for robust and consistent label propagation. Building on these findings, we introduce DRIFT, a framework for object tracking in videos that leverages a pretrained image diffusion model with SAM-guided mask refinement, achieving state-of-the-art zero-shot performance on standard video object segmentation benchmarks.
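To make the propagation mechanism described in the abstract concrete, below is a minimal, hypothetical sketch of cross-frame attention used as a label-propagation kernel, assuming query/key features for the two frames have already been extracted (for example, from the self-attention layers of a pretrained diffusion UNet). The function name, tensor shapes, and temperature parameter are illustrative assumptions, not the authors' DRIFT implementation, which additionally employs DDIM inversion, textual inversion, adaptive head weighting, and SAM-guided mask refinement.

# Illustrative sketch (not the paper's released code): propagating a reference-frame
# segmentation mask to a target frame with an attention map used as a propagation kernel.
import torch
import torch.nn.functional as F

def propagate_labels(q_tgt: torch.Tensor,   # (N_tgt, d) query features of the target frame
                     k_ref: torch.Tensor,   # (N_ref, d) key features of the reference frame
                     m_ref: torch.Tensor,   # (N_ref, C) per-pixel label probabilities (C classes)
                     temperature: float = 1.0) -> torch.Tensor:
    """Treat softmax(Q K^T / (sqrt(d) * T)) as a label-propagation kernel across frames."""
    d = q_tgt.shape[-1]
    affinity = q_tgt @ k_ref.t() / (d ** 0.5 * temperature)   # (N_tgt, N_ref) similarity logits
    kernel = affinity.softmax(dim=-1)                         # each row sums to 1: a propagation kernel
    return kernel @ m_ref                                     # (N_tgt, C) propagated soft labels

# Toy usage with random features standing in for diffusion self-attention features.
q = torch.randn(64 * 64, 320)                                 # target-frame queries
k = torch.randn(64 * 64, 320)                                 # reference-frame keys
m = F.one_hot(torch.randint(0, 2, (64 * 64,)), num_classes=2).float()  # reference mask (2 classes)
pred = propagate_labels(q, k, m)                              # soft mask for the target frame
hard_mask = pred.argmax(dim=-1).reshape(64, 64)               # hard per-pixel labels

In a typical video object segmentation setup, the annotated mask of the first frame would serve as m_ref, and labels would be propagated frame by frame through the video.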

Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2511.19936 [cs.CV] (or arXiv:2511.19936v1 [cs.CV] for this version)
DOI: https://doi.org/10.48550/arXiv.2511.19936 (arXiv-issued DOI via DataCite; registration pending)

Submission history
From: Youngseo Kim
[v1] Tue, 25 Nov 2025 05:21:23 UTC (773 KB)

