arxiv:2512.08405

Learning Robot Manipulation from Audio World Models

Published on Dec 9 · Submitted by Fan Zhang on Dec 16

Abstract

AI-generated summary: A generative latent flow matching model is proposed to predict future audio for robotic manipulation tasks, improving performance over methods without future lookahead by accurately capturing intrinsic rhythmic patterns.

World models have demonstrated impressive performance on robotic learning tasks. Many such tasks inherently demand multimodal reasoning; for example, when filling a bottle with water, visual information alone is ambiguous or incomplete, so the system must reason over the temporal evolution of audio, accounting for its underlying physical properties and pitch patterns. In this paper, we propose a generative latent flow matching model to anticipate future audio observations, enabling the system to reason about long-term consequences when integrated into a robot policy. We demonstrate the superior capabilities of our system, compared to methods without future lookahead, on two manipulation tasks that require perceiving in-the-wild audio or music signals. We further emphasize that successful robot action learning for these tasks relies not merely on multimodal input, but critically on the accurate prediction of future audio states that embody intrinsic rhythmic patterns.
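The abstract describes predicting future audio latents with a flow matching model and feeding the predictions to the robot policy. Below is a minimal, hypothetical sketch of conditional flow matching over audio latents under assumed design choices; the names, dimensions, and sampling scheme (FlowMatchingPredictor, latent_dim, Euler integration) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: conditional flow matching over audio latents.
# All module names and hyperparameters here are assumptions for illustration.
import torch
import torch.nn as nn

class FlowMatchingPredictor(nn.Module):
    """Predicts the velocity field that transports Gaussian noise to the
    next audio latent, conditioned on a window of past audio latents."""
    def __init__(self, latent_dim=64, ctx_len=8, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim * (ctx_len + 1) + 1, hidden),
            nn.SiLU(),
            nn.Linear(hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, x_t, t, context):
        # x_t: (B, latent_dim) noisy sample at flow time t
        # t:   (B, 1) flow time in [0, 1]
        # context: (B, ctx_len, latent_dim) past audio latents
        ctx = context.flatten(1)
        return self.net(torch.cat([x_t, ctx, t], dim=-1))


def flow_matching_loss(model, z_next, context):
    """Conditional flow matching objective: regress the velocity of the
    straight-line path from noise z0 to the target future latent z_next."""
    z0 = torch.randn_like(z_next)
    t = torch.rand(z_next.size(0), 1)
    x_t = (1 - t) * z0 + t * z_next   # point on the interpolation path
    v_target = z_next - z0            # constant velocity of that path
    v_pred = model(x_t, t, context)
    return ((v_pred - v_target) ** 2).mean()


@torch.no_grad()
def predict_future_latent(model, context, steps=20):
    """Sample a predicted future audio latent by Euler-integrating the
    learned velocity field from t=0 to t=1, given past latents as context."""
    x = torch.randn(context.size(0), model.net[-1].out_features)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((context.size(0), 1), i * dt)
        x = x + dt * model(x, t, context)
    return x
```

In the paper's setting, the predicted future audio states would additionally condition the action policy; this sketch covers only the audio-prediction side.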


