Residual Rotation Correction using Tactile Equivariance

1Northeastern University 2Stanford University

†Equal Advising

Real-time tactile-equivariant residual correction for robust manipulation.

Abstract

Visuotactile policy learning augments vision-only policies with tactile input, facilitating contact-rich manipulation. However, the high cost of tactile data collection makes sample efficiency the key requirement for developing visuotactile policies. We present EquiTac, a framework that exploits the inherent SO(2) symmetry of in-hand object rotation to improve sample efficiency and generalization for visuotactile policy learning. EquiTac first reconstructs surface normals from raw RGB inputs of vision-based tactile sensors, so rotations of the normal vector field correspond to in-hand object rotations. An SO(2)-equivariant network then predicts a residual rotation action that augments a base visuomotor policy at test time, enabling real-time rotation correction without additional reorientation demonstrations. On a real robot, EquiTac achieves robust zero-shot generalization to unseen in-hand orientations with very few training samples, where baselines fail even with more training data. To our knowledge, this is the first tactile learning method to explicitly encode tactile equivariance for policy learning, yielding a lightweight, symmetry-aware module that improves reliability in contact-rich tasks.

Method Overview


Tactile-guided Correction

The equivariant network observes rotated normal maps and outputs the corresponding residual rotation for the base policy.
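The key property this relies on is SO(2) equivariance: rotating the input normal field rotates the predicted residual by the same angle. The toy readout below (a NumPy sketch, not the paper's network) illustrates the property numerically; the function names and the use of a simple mean-vector angle as the "prediction" are assumptions for illustration only.

```python
import numpy as np

def mean_inplane_angle(normals):
    """Toy SO(2)-equivariant readout: average the tangential (x, y)
    components of a surface-normal field and return their angle.
    Rotating the field by theta rotates this output by theta."""
    v = normals[..., :2].reshape(-1, 2).mean(axis=0)
    return np.arctan2(v[1], v[0])

def rotate_normals(normals, theta):
    """Rotate the per-pixel normal vectors in-plane by theta.
    (A full treatment would also rotate the image grid; rotating
    the vector components alone suffices for this demonstration.)"""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    out = normals.copy()
    out[..., :2] = normals[..., :2] @ R.T
    return out

rng = np.random.default_rng(0)
normals = rng.normal(size=(8, 8, 3))  # synthetic normal field
theta = 0.3
a0 = mean_inplane_angle(normals)
a1 = mean_inplane_angle(rotate_normals(normals, theta))
# wrap the angle difference into (-pi, pi]
delta = (a1 - a0 + np.pi) % (2 * np.pi) - np.pi
print(np.allclose(delta, theta))  # equivariance: output shifts by theta
```

An equivariant network generalizes this idea with learned features, so a single demonstrated grasp orientation covers all in-plane rotations without extra reorientation data.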


Normal Map Reconstruction

We reconstruct surface normal maps from RGB tactile images to disentangle lighting effects from geometry.
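One common way to recover normals from an RGB tactile image is a photometric-stereo-style solve: each color channel is treated as shading under one known illumination direction, giving a small linear system per pixel. The sketch below assumes a calibrated 3x3 illumination matrix `L` (the values shown are placeholders, not the sensor's real calibration) and a linear shading model; real sensors typically use a calibrated lookup table or a learned model instead.

```python
import numpy as np

# Hypothetical illumination directions for a three-channel tactile
# sensor; actual values would come from per-sensor calibration.
L = np.array([[ 0.8,  0.0, 0.6],
              [-0.4,  0.7, 0.6],
              [-0.4, -0.7, 0.6]])

def rgb_to_normals(rgb):
    """Per-pixel least-squares solve of L @ n = I under a linear
    shading model, followed by normalization to unit normals."""
    h, w, _ = rgb.shape
    I = rgb.reshape(-1, 3).T                      # (3, H*W) intensities
    n, *_ = np.linalg.lstsq(L, I, rcond=None)     # (3, H*W) raw normals
    n /= np.linalg.norm(n, axis=0, keepdims=True) + 1e-8
    return n.T.reshape(h, w, 3)

# Round trip on synthetic data: render intensities from known unit
# normals, then recover them (exact here because L is invertible).
rng = np.random.default_rng(0)
gt = rng.normal(size=(4, 4, 3))
gt /= np.linalg.norm(gt, axis=-1, keepdims=True)
rgb = gt @ L.T                                    # per-channel shading L_i . n
rec = rgb_to_normals(rgb)
print(np.allclose(rec, gt, atol=1e-6))
```

Working in normal space rather than raw RGB removes sensor-specific lighting from the representation, which is what makes in-hand rotations appear as clean rotations of the vector field.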

Normal map illustration