[Submitted on 25 Nov 2025]

Title: Image Diffusion Models Exhibit Emergent Temporal Propagation in Videos
Authors: Youngseo Kim, Dohyun Kim, Geohee Han, Paul Hongsuck Seo

Abstract: Image diffusion models, though originally developed for image generation, implicitly capture rich semantic structures that enable various recognition and localization tasks beyond synthesis. In this work, we investigate how their self-attention maps can be reinterpreted as semantic label propagation kernels, providing robust pixel-level correspondences between relevant image regions. Extending this mechanism across frames yields a temporal propagation kernel that enables zero-shot object tracking via segmentation in videos. We further demonstrate the effectiveness of test-time optimization strategies (DDIM inversion, textual inversion, and adaptive head weighting) in adapting diffusion features for robust and consistent label propagation. Building on these findings, we introduce DRIFT, a framework for object tracking in videos leveraging a pretrained image diffusion model with SAM-guided mask refinement, achieving state-of-the-art zero-shot performance on standard video object segmentation benchmarks.

Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2511.19936 [cs.CV] (or arXiv:2511.19936v1 [cs.CV] for this version)
DOI: https://doi.org/10.48550/arXiv.2511.19936 (arXiv-issued DOI via DataCite, pending registration)

Submission history:
From: Youngseo Kim
[v1] Tue, 25 Nov 2025 05:21:23 UTC (773 KB)
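The abstract's core mechanism, a cross-frame attention map acting as a label propagation kernel, can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' DRIFT implementation: it assumes query features for the current frame and key features for the previous frame have already been extracted from a self-attention layer of a pretrained image diffusion model (e.g., a UNet block), and it omits DDIM inversion, textual inversion, adaptive head weighting, and SAM-guided mask refinement. All names and shapes below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def propagate_labels(q_cur, k_prev, mask_prev, temperature=1.0):
    """Propagate per-pixel labels from the previous frame to the current frame
    via a cross-frame attention (affinity) kernel.

    q_cur:     (N_cur, C)  query features of the current frame, one token per location
    k_prev:    (N_prev, C) key features of the previous frame
    mask_prev: (N_prev, K) one-hot or soft label map of the previous frame, flattened
    Returns:   (N_cur, K)  propagated soft labels for the current frame.
    """
    # Scaled dot-product affinity: the self-attention map reinterpreted as a
    # temporal propagation kernel between the two frames.
    affinity = q_cur @ k_prev.t() / (q_cur.shape[-1] ** 0.5 * temperature)
    kernel = F.softmax(affinity, dim=-1)   # each row sums to 1 over previous-frame tokens
    return kernel @ mask_prev              # weighted vote of previous-frame labels

# Toy usage with random tensors standing in for diffusion self-attention features.
if __name__ == "__main__":
    h = w = 32                 # assumed latent resolution of the attention layer
    c, k = 320, 3              # assumed feature dimension and number of object labels
    q_cur = torch.randn(h * w, c)
    k_prev = torch.randn(h * w, c)
    mask_prev = F.one_hot(torch.randint(0, k, (h * w,)), k).float()
    mask_cur = propagate_labels(q_cur, k_prev, mask_prev)
    labels_cur = mask_cur.argmax(dim=-1).reshape(h, w)   # hard labels for the current frame
    print(mask_cur.shape, labels_cur.shape)
```

In practice the soft labels would be thresholded or refined (the paper uses SAM-guided mask refinement for this step) before being used as the reference mask for the next frame.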