HoliTom: Holistic Token Merging for Fast Video Large Language Models

1Zhejiang University, 2Westlake University, 3Salesforce AI Research, 4Columbia University, 5Rice University
*Corresponding author: wanghuan [at] westlake [dot] edu [dot] cn

Left: We introduce HoliTom, a training-free holistic token merging method for fast video LLMs. Its key innovations are global, redundancy-aware outer-LLM spatio-temporal compression and robust, token-similarity-based inner-LLM compression. Right: Efficiency/performance trade-off curves of multiple training-free methods on four widely used video understanding benchmarks: MVBench, EgoSchema, LongVideoBench, and VideoMME. Our method, HoliTom, surpasses state-of-the-art approaches, maintaining 99.1% of average performance while reducing FLOPs to 6.9%.

Abstract

Video large language models (video LLMs) excel at video comprehension but suffer significant computational inefficiency due to redundant video tokens. Existing token pruning methods offer partial solutions. Approaches that operate within the LLM (inner-LLM pruning), such as FastV, still incur the full computational cost of the shallow layers. In contrast, methods that prune tokens before the LLM (outer-LLM pruning) primarily address spatial redundancy within individual frames or limited temporal windows, neglecting the crucial global temporal dynamics and correlations across longer video sequences. This leads to sub-optimal spatio-temporal reduction and fails to fully exploit video compressibility. Crucially, the synergistic potential and mutual influence of combining these strategies remain unexplored. To further reduce redundancy, we introduce HoliTom, a novel training-free holistic token merging framework. HoliTom performs outer-LLM pruning through global redundancy-aware temporal segmentation, followed by spatio-temporal merging that reduces visual tokens by over 90%, significantly alleviating the LLM's computational burden. Complementing this, we introduce a robust inner-LLM token-similarity-based merging approach, designed for superior performance and compatibility with outer-LLM pruning. Evaluations demonstrate our method's promising efficiency-performance trade-off on LLaVA-OneVision-7B, reducing computational cost to 6.9% of FLOPs while maintaining 99.1% of the original performance. Furthermore, we achieve a 2.28× reduction in Time-To-First-Token (TTFT) and a 1.32× speedup in decoding throughput, highlighting the practical benefits of our integrated approach for efficient video LLM inference.
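To give a flavor of what inner-LLM token-similarity-based merging can look like, below is a minimal PyTorch sketch. It is an illustrative stand-in, not the paper's exact procedure: the function name `inner_llm_merge`, the use of a generic per-token importance score (e.g. attention received from text tokens), the 50% keep ratio, and the average-based merge rule are all our assumptions for exposition.

```python
import torch
import torch.nn.functional as F

def inner_llm_merge(hidden, importance, keep_ratio=0.5):
    """Sketch: merge low-importance visual tokens into similar kept tokens.

    hidden:     (N, D) visual-token hidden states at a shallow LLM layer
    importance: (N,)   per-token importance, e.g. attention from text tokens
    keep_ratio: fraction of visual tokens to retain
    (Illustrative only; not HoliTom's exact algorithm.)
    """
    n_keep = max(1, int(hidden.size(0) * keep_ratio))
    order = importance.argsort(descending=True)
    kept_idx, drop_idx = order[:n_keep], order[n_keep:]
    if drop_idx.numel() == 0:
        return hidden, order  # nothing to merge

    kept, dropped = hidden[kept_idx], hidden[drop_idx]

    # Assign each dropped token to its most cosine-similar kept token.
    sim = F.normalize(dropped, dim=-1) @ F.normalize(kept, dim=-1).T
    assign = sim.argmax(dim=-1)  # (N_drop,)

    # Average each kept token with all dropped tokens merged into it.
    merged = kept.clone()
    counts = torch.ones(n_keep, device=hidden.device, dtype=hidden.dtype)
    merged.index_add_(0, assign, dropped)
    counts.index_add_(0, assign, torch.ones_like(assign, dtype=hidden.dtype))
    return merged / counts.unsqueeze(-1), kept_idx
```

Averaging (rather than discarding) the dropped tokens is what distinguishes merging from pruning: information from redundant tokens is folded into their nearest retained neighbor instead of being lost outright.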

Overview of our HoliTom method

HoliTom compresses video tokens at three scopes; the first two constitute outer-LLM pruning (see the sketch below). Temporal Merging maximizes temporal compression via global redundancy-aware segmentation, merging similar tokens into their first occurrence. Spatial Merging further reduces redundancy by applying tailored spatial compression based on the characteristics of the remaining temporal variation. Inner-LLM Merging leverages attention within the LLM to identify key tokens and merges less important, similar tokens into them, streamlining information flow inside the LLM.
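To make the outer-LLM stage concrete, here is a minimal PyTorch sketch of temporal segmentation followed by within-segment merging. The greedy similarity-threshold boundary rule, the threshold value, the mean-pooling merge anchored at each segment's first frame, and the name `temporal_segment_and_merge` are simplifying assumptions; HoliTom's actual segmentation is global and redundancy-aware rather than this local heuristic.

```python
import torch
import torch.nn.functional as F

def temporal_segment_and_merge(frame_tokens, sim_threshold=0.9):
    """Sketch of outer-LLM temporal merging.

    frame_tokens: (T, P, D) per-frame patch tokens from the vision encoder.
    1) Open a new segment wherever the mean patch-wise cosine similarity
       between consecutive frames drops below `sim_threshold`.
    2) Within each segment, collapse temporally repeated patches into the
       segment's first frame by averaging.
    (Illustrative local heuristic; not HoliTom's global segmentation.)
    """
    T, P, D = frame_tokens.shape
    feats = F.normalize(frame_tokens, dim=-1)
    # Mean cosine similarity of co-located patches in consecutive frames.
    sim = (feats[1:] * feats[:-1]).sum(-1).mean(-1)  # (T-1,)

    # Segment boundaries: a new segment starts where similarity is low.
    boundaries = [0] + [t + 1 for t in range(T - 1)
                        if sim[t] < sim_threshold] + [T]

    merged = []
    for s, e in zip(boundaries[:-1], boundaries[1:]):
        # One set of P tokens per segment, anchored at its first frame.
        merged.append(frame_tokens[s:e].mean(dim=0))
    return torch.stack(merged)  # (num_segments, P, D)
```

A static shot thus collapses from many frames to a single frame's worth of tokens, while dynamic shots keep more segments: this is where the 90%+ token reduction on redundant video comes from before any inner-LLM merging is applied.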

Method Overview

Main Results

More Visualizations

BibTeX

@article{shao2025holitom,
  title={HoliTom: Holistic Token Merging for Fast Video Large Language Models}, 
  author={Kele Shao and Keda Tao and Can Qin and Haoxuan You and Yang Sui and Huan Wang},
  journal={arXiv preprint arXiv:2505.21334},
  year={2025},
}