HiF-VLA: Hindsight, Insight and Foresight through Motion Representation for Vision-Language-Action Models

1Westlake University   2Zhejiang University   3HKUST(GZ)   4Nanjing University   5Westlake Robotics

Abstract

Vision-Language-Action (VLA) models have recently enabled robotic manipulation by grounding visual and linguistic cues into actions. However, most VLAs assume the Markov property, relying only on the current observation and thus suffering from temporal myopia that degrades long-horizon coherence. In this work, we view motion as a more compact and informative representation of temporal context and world dynamics, capturing inter-state changes while filtering static pixel-level noise. Building on this idea, we propose HiF-VLA (Hindsight, Insight, and Foresight for VLAs), a unified framework that leverages motion for bidirectional temporal reasoning. HiF-VLA encodes past dynamics through hindsight priors, anticipates future motion via foresight reasoning, and integrates both through a hindsight-modulated joint expert to enable a "think-while-acting" paradigm for long-horizon manipulation. As a result, HiF-VLA surpasses strong baselines on LIBERO-Long and CALVIN ABC-D benchmarks, while incurring negligible additional inference latency. Furthermore, HiF-VLA achieves substantial improvements in real-world long-horizon manipulation tasks, demonstrating its broad effectiveness in practical robotic settings.

Motivation & Insight


⚠️ The Challenge: Temporal Myopia

Most existing VLAs implicitly assume a Markov property, predicting actions solely from current observations. This leads to temporal myopia. Common solutions like frame stacking are computationally expensive and introduce massive pixel-level redundancy, obscuring key dynamics.

💡 Our Insight

We argue that motion—rather than raw pixels—is the most precise and compact proxy for history. It captures critical dynamic interactions while explicitly filtering out static visual redundancy.

Furthermore, robust decision-making demands bidirectional temporal reasoning. Motion acts as the natural bridge unifying the past (Hindsight) and the future (Foresight).

🚀 The Solution and Performance

  • To bridge this gap, we introduce HiF-VLA, a unified framework utilizing motion-centric bidirectional spatio-temporal reasoning. By concurrently predicting motion and action while maintaining temporal consistency, it enables a robust "Think-While-Acting" paradigm.
  • HiF-VLA demonstrates superior efficiency and scalability. It reduces inference latency by 58.3% compared to frame-stacking methods while achieving state-of-the-art performance on LIBERO-Long (96.4%), CALVIN, and complex real-world manipulation tasks.
Method


    The HiF-VLA Framework. Our approach unifies perception, reasoning, and action through three key stages: (a) Hindsight Prior Acquisition, (b) Foresight Reasoning with Insight, and (c) Hindsight-Modulated Joint Expert.

  • Hindsight Prior Acquisition: Instead of stacking raw image frames, we encode historical context into structured, low-dimensional Motion Vectors (MVs). This representation efficiently serves as the "Hindsight," preserving essential dynamics while discarding pixel-level redundancy (see the first sketch after this list).
  • Foresight Reasoning with Insight: Leveraging the reasoning capabilities of VLMs, the model interprets task instructions and the current observation ("Insight"). It anticipates plausible Foresight Motions and generates latent action tokens.
  • Hindsight-Modulated Joint Expert: We introduce a Joint Expert where the Hindsight (past motion) acts as a constraint to modulate the Foresight and Action streams. This modulation ensures that generated actions are causally consistent and temporally coherent (see the second sketch after this list).
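
As a concrete (though hypothetical) illustration of what such a compact motion representation can look like, the first sketch below computes a coarse grid of motion vectors from two consecutive frames with classic block matching (sum of absolute differences). The block size, search radius, and all function and variable names are assumptions made for illustration; they are not necessarily HiF-VLA's actual MV extractor.

    # Minimal block-matching sketch: compress two frames into a coarse grid of
    # 2-D motion vectors, one per 16x16 block (all parameters are illustrative).
    import numpy as np

    def block_motion_vectors(prev: np.ndarray, curr: np.ndarray,
                             block: int = 16, radius: int = 8) -> np.ndarray:
        """Return an (H//block, W//block, 2) array of (dy, dx) displacements."""
        H, W = curr.shape
        mvs = np.zeros((H // block, W // block, 2), dtype=np.float32)
        for by in range(H // block):
            for bx in range(W // block):
                y, x = by * block, bx * block
                patch = curr[y:y + block, x:x + block].astype(np.int32)
                best, best_dy, best_dx = None, 0, 0
                # Exhaustive search over a small window in the previous frame.
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        yy, xx = y + dy, x + dx
                        if yy < 0 or xx < 0 or yy + block > H or xx + block > W:
                            continue
                        cand = prev[yy:yy + block, xx:xx + block].astype(np.int32)
                        sad = np.abs(patch - cand).sum()
                        if best is None or sad < best:
                            best, best_dy, best_dx = sad, dy, dx
                mvs[by, bx] = (best_dy, best_dx)
        return mvs

    # Example: two 128x128 grayscale frames -> an 8x8 grid of 2-D motion vectors,
    # far more compact than stacking the raw pixel frames.
    prev = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
    curr = np.roll(prev, shift=3, axis=1)  # simulate horizontal motion
    print(block_motion_vectors(prev, curr).shape)  # (8, 8, 2)

In practice such vectors could equally be read out of a video codec or an optical-flow estimator; the point is that a small grid of displacements summarizes the inter-frame dynamics that frame stacking would re-encode as full images.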
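
The second sketch gives a rough, PyTorch-flavored reading of how past motion could modulate the foresight and action streams: a pooled hindsight embedding produces FiLM-style scale and shift terms applied to the concatenated foresight and action tokens before a small shared trunk decodes both future motion and actions. The FiLM-style conditioning, module names, and dimensions are assumptions for illustration, not the paper's exact architecture.

    # Hypothetical hindsight-modulated joint expert (illustrative only).
    import torch
    import torch.nn as nn

    class HindsightModulatedExpert(nn.Module):
        def __init__(self, d_model: int = 256, action_dim: int = 7, motion_dim: int = 128):
            super().__init__()
            # Maps the hindsight (past-motion) embedding to per-channel scale/shift.
            self.to_film = nn.Linear(d_model, 2 * d_model)
            # Shared "joint expert" trunk over the modulated tokens.
            self.trunk = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
                num_layers=2,
            )
            self.action_head = nn.Linear(d_model, action_dim)  # low-level action chunk
            self.motion_head = nn.Linear(d_model, motion_dim)  # foresight motion prediction

        def forward(self, hindsight, foresight_tokens, action_tokens):
            # hindsight:        (B, d_model)      pooled past-motion embedding
            # foresight_tokens: (B, Tf, d_model)  latent future-motion queries
            # action_tokens:    (B, Ta, d_model)  latent action queries
            scale, shift = self.to_film(hindsight).chunk(2, dim=-1)
            scale, shift = scale.unsqueeze(1), shift.unsqueeze(1)
            tokens = torch.cat([foresight_tokens, action_tokens], dim=1)
            tokens = (1 + scale) * tokens + shift         # hindsight acts as a soft constraint
            tokens = self.trunk(tokens)
            Tf = foresight_tokens.shape[1]
            foresight = self.motion_head(tokens[:, :Tf])  # predicted future motion
            actions = self.action_head(tokens[:, Tf:])    # predicted actions
            return foresight, actions

    # Toy usage: batch of 2, 8 foresight queries, 4 action queries.
    model = HindsightModulatedExpert()
    f, a = model(torch.randn(2, 256), torch.randn(2, 8, 256), torch.randn(2, 4, 256))
    print(f.shape, a.shape)  # torch.Size([2, 8, 128]) torch.Size([2, 4, 7])

Decoding foresight motion and actions from the same hindsight-conditioned token stream is one way to realize the "think-while-acting" idea: the two outputs share temporal context instead of being predicted independently.
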
Experiments


    Evaluation on LIBERO Benchmark

    Comparisons with state-of-the-art methods on the LIBERO benchmark.

    HiF-VLA establishes a new state-of-the-art on the LIBERO benchmark, demonstrating significant gains especially in complex, long-horizon manipulation tasks.

    Evaluation on CALVIN ABC-D Benchmark

    Comparisons with state-of-the-art methods on the CALVIN ABC-D benchmark.

    On the CALVIN ABC-D benchmark, our method outperforms existing approaches in average sequence length across both third-view and multi-view settings.

    Evaluation on Real World

    We conduct experiments on three real-world long-horizon tasks, shown in the videos below at the robot's original action speed.
    Place blocks on the plates.
    Cover block and stack bowls.
    Press buttons in order.

    BibTeX

    
    @misc{lin2025hifvlahindsightinsightforesight,
      title={HiF-VLA: Hindsight, Insight and Foresight through Motion Representation for Vision-Language-Action Models},
      author={Minghui Lin and Pengxiang Ding and Shu Wang and Zifeng Zhuang and Yang Liu and Xinyang Tong and Wenxuan Song and Shangke Lyu and Siteng Huang and Donglin Wang},
      year={2025},
      eprint={2512.09928},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2512.09928},
    }