MoE-INR: Implicit Neural Representation with Mixture-of-Experts for Time-Varying Volumetric Data Compression
Jun Han
Kaiyuan Tang
Chaoli Wang

Room: Hall E2
Keywords
Time-varying data compression, implicit neural representation, volume visualization, mixture-of-experts
Abstract
Implicit neural representations (INRs) have emerged as a transformative paradigm for time-varying volumetric data compression and representation, owing to their ability to model high-dimensional signals effectively. INRs represent scalar fields based on sampled coordinates, typically using either a single network for the entire field or multiple networks across different spatial domains. However, these approaches often struggle to model complex patterns and can introduce boundary artifacts. To address these limitations, we propose MoE-INR, an INR architecture based on a mixture-of-experts (MoE) framework. MoE-INR automatically subdivides the spatiotemporal field into irregular regions and dynamically assigns them to different expert networks. The architecture comprises three key components: a policy network, a shared encoder, and multiple expert decoders. The policy network subdivides the field and determines which expert decoder is responsible for a given input coordinate. The shared encoder extracts hidden representations from the input coordinates, and the expert decoders transform these high-dimensional features into scalar values. This design yields a unified framework accommodating diverse INR types, including conventional, grid-based, and ensemble variants. We evaluate the effectiveness of MoE-INR on multiple time-varying datasets with varying characteristics. Experimental results demonstrate that MoE-INR significantly outperforms existing non-MoE and MoE-based INRs as well as traditional lossy compression methods across quantitative and qualitative metrics under various compression ratios.
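The three-component design described above can be sketched in code. The following is a minimal NumPy illustration, not the authors' implementation: the layer sizes, ReLU activations, hard top-1 routing, and four-expert count are all illustrative assumptions. It shows how a policy network routes each (x, y, z, t) coordinate to one expert decoder, while a shared encoder supplies the hidden features that every expert consumes.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_init(sizes):
    # Random weights for a small MLP; sizes = [in, hidden..., out].
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    # ReLU hidden layers, linear final layer.
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)
    return x

N_EXPERTS = 4  # illustrative expert count
policy  = mlp_init([4, 32, N_EXPERTS])   # (x, y, z, t) -> expert logits
encoder = mlp_init([4, 64, 64])          # shared coordinate encoder
experts = [mlp_init([64, 64, 1]) for _ in range(N_EXPERTS)]  # feature -> scalar

def moe_inr(coords):
    """coords: (N, 4) array of normalized (x, y, z, t) samples."""
    logits = mlp_forward(policy, coords)       # policy subdivides the field
    assign = np.argmax(logits, axis=1)         # hard top-1 routing (assumption)
    feats = mlp_forward(encoder, coords)       # shared hidden representation
    out = np.empty((coords.shape[0], 1))
    for e in range(N_EXPERTS):                 # each expert decodes its region
        mask = assign == e
        if mask.any():
            out[mask] = mlp_forward(experts[e], feats[mask])
    return out

values = moe_inr(rng.uniform(-1.0, 1.0, size=(1024, 4)))
print(values.shape)  # (1024, 1)
```

In this sketch the argmax routing is what yields the irregular, data-driven subdivision: region boundaries follow the policy network's decision surfaces rather than a fixed spatial grid, which is how the paper's framing avoids hand-crafted block partitions.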