ViTaS

Visual Tactile Soft Fusion Contrastive Learning for Visuomotor Learning

1Harbin Institute of Technology, 2The University of Hong Kong, 3Tsinghua University, IIIS, 4Shanghai Qi Zhi Institute, 5Carnegie Mellon University
* Equal contribution

ICRA 2026



Abstract

Tactile information plays a crucial role in human manipulation and has recently attracted increasing attention in robotic manipulation. However, existing approaches mostly focus on aligning visual and tactile features, and their integration mechanism is often direct concatenation. As a result, they neglect the inherent complementarity of the two modalities and under-exploit their alignment, so they struggle in occluded scenarios, which limits their potential for real-world deployment.

In this paper, we present ViTaS, a simple yet effective framework that incorporates both visual and tactile information to guide an agent's behavior. We introduce Soft Fusion Contrastive Learning, an extension of conventional contrastive learning, together with a CVAE module, to exploit both the alignment and the complementarity of visuo-tactile representations. We demonstrate the effectiveness of our method in 12 simulated and 3 real-world environments, where ViTaS significantly outperforms existing baselines.


Method


ViTaS takes vision and touch as inputs, which are processed by separate CNN encoders. The encoded embeddings are combined by the soft fusion contrastive approach, yielding a fused feature representation for the policy network. A CVAE-based reconstruction framework is additionally applied to promote cross-modal integration.
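The page does not spell out the loss, so the following is only a minimal NumPy sketch of one plausible "soft" variant of an InfoNCE-style visuo-tactile objective: instead of a one-hot target on the matched visual-tactile pair, each row's target distribution keeps mass `alpha` on the pair and spreads the remainder over in-batch negatives. The function name and the `tau`/`alpha` values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_contrastive_loss(z_v, z_t, tau=0.1, alpha=0.9):
    """Soft InfoNCE-style loss between visual (z_v) and tactile (z_t)
    embeddings, each of shape (batch, dim).

    With alpha = 1.0 this reduces to the standard one-hot InfoNCE target;
    alpha < 1.0 softens the target, leaving probability mass on negatives.
    """
    # L2-normalize so the dot product is a cosine similarity
    z_v = z_v / np.linalg.norm(z_v, axis=1, keepdims=True)
    z_t = z_t / np.linalg.norm(z_t, axis=1, keepdims=True)
    logits = z_v @ z_t.T / tau            # (batch, batch) similarity matrix
    n = logits.shape[0]

    # Soft targets: alpha on the matched pair, rest spread over negatives
    targets = np.full((n, n), (1.0 - alpha) / (n - 1))
    np.fill_diagonal(targets, alpha)

    # Numerically stable log-softmax over each row
    m = logits.max(axis=1, keepdims=True)
    log_p = logits - m - np.log(np.exp(logits - m).sum(axis=1, keepdims=True))

    # Cross-entropy between soft targets and predicted distribution
    return float(-(targets * log_p).sum(axis=1).mean())
```

In a training loop this loss would be computed on the CNN encoder outputs and backpropagated jointly with the policy and CVAE reconstruction objectives; aligned visuo-tactile pairs should drive the loss toward zero.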


Simulation Experiments

Insertion
Door
Lift
Pen Rotate
Dual Arm Lift
Mobile Catch
Egg Rotate
Block Rotate
Block Spin
Insertion Noisy
Lift w/ Cap
Lift w/ Can
Simulation Results

Success rates (%) across 12 simulated tasks


Real-World Experiments

Real-World Setup

Hardware setup for ViTaS

Real-World Setup

ViTaS applied to imitation learning with a diffusion policy

Dual Arm Clean
Table Pick Place
Fridge Pick Place
Real-World Results

Success rates (%) on 3 real-world tasks (25 trials each)

BibTeX

@inproceedings{tian2026vitas,
  author    = {Tian, Yufeng and Cheng, Shuiqi and Wei, Tianming and Zhou, Tianxing and Zhang, Yuanhang and Liu, Zixian and Han, Qianwei and Yuan, Zhecheng and Xu, Huazhe},
  title     = {ViTaS: Visual Tactile Soft Fusion Contrastive Learning for Visuomotor Learning},
  booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
  year      = {2026},
}