MemoryOut: Learning Principal Features via Multimodal Sparse Filtering Network for Semi-supervised Video Anomaly Detection

1School of Software Engineering, South China University of Technology
2WeChat AI, Tencent
3Institute for Super Robotics (Huangpu)
4School of Computer Science and Engineering, South China University of Technology
*Equal contributions.
Comparison between previous frameworks and our proposed framework. (a) Existing memory modules filter anomalous information through inflexible prototype matching. (b) Our proposed sparse feature filtering paradigm achieves better VAD performance than memory-based methods. (c) Existing methods measure anomaly scores through reconstruction or prediction errors, which may yield small errors for anomalies involving small objects. (d) Our proposed multimodal framework introduces a semantic branch that captures subtle local anomalies through semantic errors.

Abstract

Video Anomaly Detection (VAD) methods based on reconstruction or prediction face two critical challenges: (1) strong generalization capability often results in accurate reconstruction or prediction of abnormal events, making it difficult to distinguish normal from abnormal patterns; (2) reliance on only low-level appearance and motion cues limits their ability to identify the high-level semantics of abnormal events in complex scenes. To address these limitations, we propose a novel VAD framework with two key innovations. First, to suppress excessive generalization, we introduce the Sparse Feature Filtering Module (SFFM), which employs bottleneck filters to dynamically and adaptively remove abnormal information from features. Unlike traditional memory modules, it does not need to memorize normal prototypes across the training dataset. Furthermore, we design a Mixture of Experts (MoE) architecture for the SFFM: each expert extracts specialized principal features at inference time, and different experts are selectively activated to ensure the diversity of the learned principal features. Second, to overcome the neglect of semantics in existing methods, we integrate a Vision-Language Model (VLM) to generate textual descriptions for video clips, enabling comprehensive joint modeling of semantic, appearance, and motion cues. Additionally, we enforce modality consistency through semantic similarity constraints and a motion frame-difference contrastive loss. Extensive experiments on multiple public datasets validate the effectiveness of our multimodal joint modeling framework and sparse feature filtering paradigm.
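As a concrete illustration of the sparse feature filtering idea, the sketch below shows how bottleneck experts with top-k routing could be implemented in PyTorch. The class names, dimensions, and gating scheme are our own illustrative assumptions, not the exact design used in the paper.

# Minimal sketch of a Sparse Feature Filtering Module (SFFM) with MoE routing.
# Module names, dimensions, and the top-k gating below are illustrative
# assumptions, not the authors' exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BottleneckExpert(nn.Module):
    """One expert: a narrow bottleneck that keeps only principal feature components."""
    def __init__(self, dim, bottleneck_dim):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck_dim)  # compress; rare (abnormal) components are dropped
        self.up = nn.Linear(bottleneck_dim, dim)    # project back to the feature space

    def forward(self, x):
        return self.up(F.relu(self.down(x)))

class SparseFeatureFilter(nn.Module):
    """Routes each token to a few experts so the learned principal features stay diverse."""
    def __init__(self, dim=256, bottleneck_dim=32, num_experts=4, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList([BottleneckExpert(dim, bottleneck_dim) for _ in range(num_experts)])
        self.gate = nn.Linear(dim, num_experts)
        self.top_k = top_k

    def forward(self, x):                               # x: (batch, tokens, dim)
        weights, idx = self.gate(x).topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)               # normalize over the selected experts only
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            expert_out = expert(x)
            for k in range(self.top_k):
                mask = (idx[..., k] == e).unsqueeze(-1).float()
                out = out + mask * weights[..., k:k+1] * expert_out
        return out

Because each token passes through only the narrow bottlenecks of a few selected experts, rarely seen (abnormal) feature components are poorly preserved, which is the behavior the SFFM relies on to enlarge prediction errors for anomalies.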

Method

Overview of SFN-VAD. First, the TMFE extracts textual descriptions from the input clips, then encodes and fuses semantic, appearance, and motion features. Second, the MJAD jointly decodes the fused features to predict the next frame and its semantic features, while the SFFM filters out abnormal information so that prediction errors increase when anomalies occur. Finally, anomaly scores are computed from the frame prediction errors and semantic errors.
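To make the scoring step concrete, the following sketch fuses a frame prediction error with a semantic error into a single anomaly score. The cosine-distance semantic error, min-max normalization, and the alpha weighting are assumptions for illustration, not the paper's exact formulation.

# Illustrative fusion of frame prediction error and semantic error into an
# anomaly score; the error definitions, normalization, and alpha weighting
# are assumptions, not the paper's exact formulation.
import torch
import torch.nn.functional as F

def anomaly_score(pred_frame, gt_frame, pred_sem, gt_sem, alpha=0.5):
    """Returns one score per clip in the batch; higher means more anomalous."""
    # Appearance branch: pixel-wise prediction error of the next frame.
    frame_err = F.mse_loss(pred_frame, gt_frame, reduction="none").mean(dim=(1, 2, 3))
    # Semantic branch: distance between predicted and reference semantic features.
    sem_err = 1.0 - F.cosine_similarity(pred_sem, gt_sem, dim=-1)

    def minmax(x):                                   # rescale each error to [0, 1] before fusing
        return (x - x.min()) / (x.max() - x.min() + 1e-8)

    return alpha * minmax(frame_err) + (1.0 - alpha) * minmax(sem_err)

In a setup like this, a small anomalous object may produce a tiny pixel-wise frame error but still yield a large semantic error, so the fused score remains sensitive to subtle local anomalies.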

Demo

BibTeX

@misc{li2025memoryout,
      title={MemoryOut: Learning Principal Features via Multimodal Sparse Filtering Network for Semi-supervised Video Anomaly Detection},
      author={Li, Juntong and Dang, Lingwei and Su, Yukun and Hao, Yun and Xiao, Qingxin and Nie, Yongwei and Wu, Qingyao},
      year={2025},
      eprint={2506.02535},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2506.02535},
}