MMTwin: Novel Diffusion Models for Multimodal 3D Hand Trajectory Prediction

1Shanghai Jiao Tong University, 2Meta Reality Labs, 3China University of Mining and Technology, 4National University of Defense Technology

What if we exploited the synergy between hand movements and headset camera egomotion when predicting future interactions?

MMTwin incorporates RGB images, point clouds, text prompts, and past hand waypoints to coherently predict future camera egomotion and 3D hand trajectories in egocentric views.

Abstract

We present MMTwin, novel diffusion models for multimodal 3D hand trajectory prediction (HTP). MMTwin is designed to absorb multimodal inputs encompassing 2D RGB images, 3D point clouds, past hand waypoints, and text prompts. In addition, two latent diffusion models, the egomotion diffusion and the HTP diffusion, are integrated into MMTwin as twins to concurrently predict camera egomotion and future hand trajectories. We propose a novel hybrid Mamba-Transformer module as the denoising model of the HTP diffusion to better fuse multimodal features. Experimental results on three publicly available datasets and our self-recorded data demonstrate that MMTwin predicts more plausible future 3D hand trajectories than state-of-the-art baselines and generalizes well to unseen environments.

MMTwin Architecture


Our proposed MMTwin (a) extracts features from multimodal data and (b) decouples the prediction of future camera egomotion features and 3D hand trajectories via novel twin diffusion models. A vanilla Mamba (VM) serves as the denoising model of the egomotion diffusion. For the HTP diffusion, we further design a new denoising model built on (c) a hybrid Mamba-Transformer module (HMTM), which comprises egomotion-aware Mamba (EAM) blocks and (d) a structure-aware Transformer (SAT).
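For readers who prefer code, below is a minimal, illustrative PyTorch sketch of the twin denoising setup, not the official implementation. All module names, tensor shapes, the FiLM-style conditioning, and the gated convolutional block standing in for Mamba are our own assumptions for exposition; please refer to the repo for the actual code.

# Minimal sketch of MMTwin's twin denoisers (NOT the official code).
# Shapes, names, and the gated block standing in for Mamba are assumptions.
import torch
import torch.nn as nn

class MambaLikeBlock(nn.Module):
    """Stand-in for a Mamba block: a gated depthwise-conv sequence mixer."""
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.conv = nn.Conv1d(dim, dim, kernel_size=4, padding=3, groups=dim)
        self.gate = nn.Linear(dim, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                      # x: (B, L, D)
        h = self.norm(x)
        c = self.conv(h.transpose(1, 2))[..., : x.size(1)].transpose(1, 2)
        return x + self.proj(torch.sigmoid(self.gate(h)) * c)

class EgomotionAwareMamba(nn.Module):
    """EAM block: sequence mixing conditioned on egomotion features."""
    def __init__(self, dim):
        super().__init__()
        self.mixer = MambaLikeBlock(dim)
        self.film = nn.Linear(dim, 2 * dim)    # FiLM-style conditioning (assumed)

    def forward(self, x, ego):                 # ego: (B, D) pooled egomotion feature
        scale, shift = self.film(ego).chunk(2, dim=-1)
        return self.mixer(x * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1))

class HybridMambaTransformer(nn.Module):
    """HMTM sketch: EAM blocks followed by a structure-aware Transformer (SAT)."""
    def __init__(self, dim, n_eam=2, n_sat=2, n_heads=4):
        super().__init__()
        self.eam = nn.ModuleList([EgomotionAwareMamba(dim) for _ in range(n_eam)])
        layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.sat = nn.TransformerEncoder(layer, n_sat)
        self.head = nn.Linear(dim, 3)          # 3D waypoint offsets

    def forward(self, noisy_traj_feat, ego_feat):
        h = noisy_traj_feat
        for blk in self.eam:
            h = blk(h, ego_feat)
        return self.head(self.sat(h))          # denoised 3D waypoints

# One illustrative denoising call.
B, L, D = 2, 10, 128                           # batch, horizon, feature dim (assumed)
ego_denoiser = MambaLikeBlock(D)               # "vanilla Mamba" twin (stand-in)
htp_denoiser = HybridMambaTransformer(D)
ego_feat = ego_denoiser(torch.randn(B, L, D)).mean(dim=1)
pred = htp_denoiser(torch.randn(B, L, D), ego_feat)
print(pred.shape)                              # torch.Size([2, 10, 3])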

Visualizations with Point Clouds

Green: Past waypoints, Blue: GT future waypoints, Red: MMTwin predictions
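For reference, here is a small matplotlib sketch that reproduces this color scheme. The waypoint arrays below are random placeholders; in practice they would come from the dataset and the model's predictions.

# Minimal sketch: plot past / ground-truth / predicted 3D waypoints
# in the color scheme above. The arrays are random placeholders.
import numpy as np
import matplotlib.pyplot as plt

past = np.cumsum(np.random.randn(8, 3) * 0.01, axis=0)           # (N, 3) past waypoints
gt = past[-1] + np.cumsum(np.random.randn(10, 3) * 0.01, axis=0) # future ground truth
pred = gt + np.random.randn(*gt.shape) * 0.005                   # fake "prediction"

ax = plt.figure().add_subplot(projection="3d")
ax.plot(*past.T, "o-", color="green", label="Past waypoints")
ax.plot(*gt.T, "o-", color="blue", label="GT future waypoints")
ax.plot(*pred.T, "o-", color="red", label="MMTwin predictions")
ax.legend()
plt.show()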

Multifinger Predictions: A New Path to Human-Robot Policy Transfer?

MMTwin also predicts the 3D movements of multiple fingers well, which suggests its potential for transfer to robotic manipulation skills (an avenue we are currently exploring).

CABH Benchmark


We have released our CABH Benchmark to enable fast evaluation of hand trajectory prediction methods and their robot policy transfer potential. Feel free to try it!

| Task | Description | Link (raw) | Link (preprocessed) | Link (GLIP feats) | Link (train/test splits) |
| --- | --- | --- | --- | --- | --- |
| 1 | place the cup on the coaster | hand_data_red_cup.tar.gz | hand_data_for_pipeline_mask_redcup.tar.gz | glip_feats_redcup.tar.gz | train_split.txt / test_split.txt |
| 2 | put the apple on the plate | hand_data_red_apple.tar.gz | hand_data_for_pipeline_mask_redapple.tar.gz | glip_feats_redapple.tar.gz | train_split.txt / test_split.txt |
| 3 | place the box on the shelf | hand_data_box.tar.gz | hand_data_for_pipeline_mask_box.tar.gz | glip_feats_box.tar.gz | train_split.txt / test_split.txt |

Please refer to our repo for instructions on how to use this benchmark.
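As a quick-start illustration, the following Python sketch unpacks one archive and reads the split files. The internal layout of the archives and the split-file format (one sample identifier per line) are assumptions on our part; the repo's instructions take precedence.

# Quick-start sketch for the CABH benchmark (layout assumptions ours;
# the official repo instructions take precedence).
import tarfile
from pathlib import Path

data_root = Path("cabh/red_cup")
data_root.mkdir(parents=True, exist_ok=True)

# 1) Unpack one of the raw archives, e.g. hand_data_red_cup.tar.gz.
with tarfile.open("hand_data_red_cup.tar.gz") as tar:
    tar.extractall(data_root)

# 2) Read the train/test splits (assumed: one sample id per line).
train_ids = Path("train_split.txt").read_text().split()
test_ids = Path("test_split.txt").read_text().split()
print(f"{len(train_ids)} train / {len(test_ids)} test samples")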

BibTeX

@misc{ma2025mmtwin,
      title={Novel Diffusion Models for Multimodal 3D Hand Trajectory Prediction}, 
      author={Junyi Ma and Wentao Bao and Jingyi Xu and Guanzhong Sun and Xieyuanli Chen and Hesheng Wang},
      year={2025},
      eprint={2504.07375},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2504.07375}, 
}