
🤔🎞️ Video-of-Thought: Step-by-Step Video Reasoning from Perception to Cognition

pytorch 1.8.1 · Build Status

The implementation of the ICML 2024 paper Video-of-Thought: Step-by-Step Video Reasoning from Perception to Cognition


🎉 Visit the project page: VoT


Overview

VoT is the first video Chain-of-Thought reasoning framework. It decomposes a raw, complex problem into a chain of sub-problems and reasons through them step by step, from low level to high level, enabling not only pixel-level perceptive recognition but also semantic, cognitive understanding of videos.
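
Since the official code is still pending (see the Code section below), the snippet here is only a minimal illustrative sketch of such a decompose-then-reason loop; the `video_of_thought` function, the `mllm` callable, and the example sub-questions are hypothetical placeholders, not the paper's actual implementation.

```python
from typing import Callable, List

def video_of_thought(
    question: str,
    video_path: str,
    subquestions: List[str],
    mllm: Callable[[str], str],
) -> str:
    """Illustrative multi-step reasoning loop (hypothetical, not the official VoT code).

    Each sub-problem is answered in order, and earlier answers are fed back into
    later prompts so the chain moves from low-level perception to high-level cognition.
    """
    context: List[str] = []
    for i, sub_q in enumerate(subquestions, start=1):
        prompt = (
            f"Video: {video_path}\n"
            "Previous steps:\n" + "\n".join(context) + "\n"
            f"Step {i}: {sub_q}"
        )
        answer = mllm(prompt)
        context.append(f"Step {i} answer: {answer}")
    # Final step: aggregate the chain of answers into a reply to the original question.
    final_prompt = f"Question: {question}\n" + "\n".join(context) + "\nFinal answer:"
    return mllm(final_prompt)

if __name__ == "__main__":
    # Toy usage with a dummy model that just echoes the last prompt line.
    dummy = lambda prompt: f"<answer to: {prompt.splitlines()[-1]}>"
    print(video_of_thought(
        question="Why does the person stop the car?",
        video_path="example.mp4",
        subquestions=["Which objects are involved?", "What actions occur over time?"],
        mllm=dummy,
    ))
```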

We also introduce a novel video MLLM, MotionEpic, which supports not only video input but also the encoding, understanding, and generation of spatial-temporal scene graphs (STSGs).
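
Likewise, purely as a hedged sketch of how a video MLLM might expose STSG encoding, understanding, and generation alongside video input, here is a toy interface; the `STSG` dataclass and the `MotionEpicLike` class, together with all of their methods, are assumptions for illustration and not MotionEpic's real API.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class STSG:
    """Toy spatial-temporal scene graph: per-frame object nodes and relation edges.
    (Hypothetical structure, not the paper's exact STSG format.)"""
    nodes: Dict[int, List[str]] = field(default_factory=dict)                   # frame -> object labels
    edges: Dict[int, List[Tuple[str, str, str]]] = field(default_factory=dict)  # frame -> (subj, rel, obj)

class MotionEpicLike:
    """Sketch of a video-MLLM wrapper that consumes video plus an optional STSG
    and can also generate an STSG from the video (all methods are stubs)."""

    def encode_video(self, video_path: str) -> list:
        return []  # placeholder for video features

    def generate_stsg(self, video_path: str) -> STSG:
        return STSG()  # placeholder: parse the video into an STSG

    def answer(self, video_path: str, prompt: str, stsg: Optional[STSG] = None) -> str:
        graph = stsg or self.generate_stsg(video_path)
        return f"[stub] answer to '{prompt}' grounded on {len(graph.nodes)} annotated frames"
```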


Code

(TBD)


Citation

If you use this work, please kindly cite:

@inproceedings{VoT24Hao,
  author    = {Hao Fei and Shengqiong Wu and Wei Ji and Hanwang Zhang and Meishan Zhang and Mong-Li Lee and Wynne Hsu},
  title     = {Video-of-Thought: Step-by-Step Video Reasoning from Perception to Cognition},
  booktitle = {Proceedings of the International Conference on Machine Learning (ICML)},
  year      = {2024},
}

License

The code is released under the Apache License 2.0 for non-commercial use only.


Contact

For any questions, feel free to contact Hao Fei.
