Research
My research primarily revolves around video data, which offers a lens into the intricate dynamics and mechanisms that define our world. I am particularly interested in self-supervised methods and vision-language models, as they yield more generalizable visual representations.
|
|
SIGMA: Sinkhorn-Guided Masked Video Modeling
Mohammadreza Salehi*, Michael Dorkenwald*, Fida Mohammad Thoker*, Efstratios Gavves, Cees Snoek, Yuki M. Asano
ECCV 2024
|
|
PIN: Positional Insert Unlocks Object Localisation Abilities in VLMs
Michael Dorkenwald, Nimrod Barazani, Cees Snoek*, Yuki M. Asano*
CVPR 2024
|
|
SCVRL: Shuffled Contrastive Video Representation Learning
Michael Dorkenwald, Fanyi Xiao, Biagio Brattoli, Joseph Tighe, Davide Modolo
CVPR 2022, L3D-IVU Workshop
|
|
iPOKE: Poking a Still Image for Controlled Stochastic Video Synthesis
Andreas Blattmann, Timo Milbich, Michael Dorkenwald, Björn Ommer
ICCV 2021
|
|
Stochastic Image-to-Video Synthesis using cINNs
Michael Dorkenwald, Timo Milbich, Andreas Blattmann, Robin Rombach, Konstantinos G. Derpanis, Björn Ommer
CVPR 2021
|
|
Behavior-Driven Synthesis of Human Dynamics
Andreas Blattmann, Timo Milbich, Michael Dorkenwald, Björn Ommer
CVPR 2021
|
|
Understanding Object Dynamics for Interactive Image-to-Video Synthesis
Andreas Blattmann, Timo Milbich, Michael Dorkenwald, Björn Ommer
CVPR 2021
|
|
Unsupervised behaviour analysis and magnification (uBAM) using deep learning
Biagio Brattoli*, Uta Buechler*, Michael Dorkenwald, Philipp Reiser, Linard Filli, Fritjof Helmchen, Anna-Sophia Wahl, Björn Ommer
Nature Machine Intelligence, 2021