Adaptive Multi-Resolution Encoding for Interactive Large-Scale Volume Visualization through Functional Approximation
Jianxin Sun - University of Nebraska-Lincoln, Lincoln, United States
David Lenz - Argonne National Laboratory, Lemont, United States
Hongfeng Yu - University of Nebraska-Lincoln, Lincoln, United States
Tom Peterka - Argonne National Laboratory, Lemont, United States
Room: Bayshore II
Session: 2024-10-13T16:00:00Z
Abstract
Functional approximation as a high-order continuous representation provides more accurate value and gradient queries than the traditional discrete volume representation. Volume visualization rendered directly from functional approximation generates high-quality results without the high-order artifacts caused by trilinear interpolation. However, querying an encoded functional approximation is computationally expensive, especially when the input dataset is large, making functional approximation impractical for interactive visualization. In this paper, we propose Adaptive-FAM, a novel multi-resolution functional approximation representation that is lightweight and fast to query. We also design a GPU-accelerated out-of-core multi-resolution volume visualization framework that directly utilizes the Adaptive-FAM representation to generate high-quality renderings with interactive responsiveness. Our method not only dramatically decreases caching time, one of the main contributors to input latency, but also effectively improves the cache hit rate through prefetching. Our approach significantly outperforms the traditional functional approximation method in terms of input latency while maintaining comparable rendering quality.
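To make the out-of-core caching and prefetching idea from the abstract concrete, the sketch below shows a minimal multi-resolution block cache with LRU eviction, distance-based level selection, and speculative prefetching. This is not the authors' implementation; the class name, the `loader` callback, and the distance thresholds are hypothetical placeholders used only for illustration.

```python
# Minimal sketch (hypothetical, not the Adaptive-FAM implementation): an
# out-of-core block cache that picks a resolution level per block from the
# camera distance and prefetches predicted blocks to raise the cache hit rate.
from collections import OrderedDict


class MultiResBlockCache:
    def __init__(self, capacity, loader):
        self.capacity = capacity      # max number of decoded blocks kept resident
        self.loader = loader          # callable: (block_id, level) -> decoded block
        self.cache = OrderedDict()    # (block_id, level) -> block, kept in LRU order

    def _level_for_distance(self, distance):
        # Coarser encodings for blocks farther from the camera (thresholds are arbitrary here).
        if distance < 1.0:
            return 0                  # finest encoding
        if distance < 4.0:
            return 1
        return 2                      # coarsest encoding

    def fetch(self, block_id, distance):
        level = self._level_for_distance(distance)
        key = (block_id, level)
        if key in self.cache:         # cache hit: refresh LRU position
            self.cache.move_to_end(key)
            return self.cache[key]
        block = self.loader(block_id, level)    # cache miss: load/decode out of core
        self.cache[key] = block
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)      # evict the least recently used block
        return block

    def prefetch(self, predicted_block_ids, distance):
        # Speculatively load blocks expected to be needed by upcoming frames.
        for block_id in predicted_block_ids:
            self.fetch(block_id, distance)
```

In this toy setup, higher cache hit rates come from calling `prefetch` with blocks predicted from camera motion before they are requested by the renderer; the actual framework described in the paper performs this on the GPU with encoded functional-approximation models rather than raw blocks.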