Floating-point spatiotemporal datasets are now routinely generated by scientific observational apparatuses and computer simulations at an unprecedented pace. These large volumetric datasets, on the order of terabytes or petabytes, consume massive bandwidth, storage, and computational resources. Meanwhile, scientists equipped with low-end post-analysis machines often find it impossible to visualize and analyze such massive datasets with the limited resources at hand, let alone achieve their ultimate goal of real-time analysis and visualization. To bridge this gap, a compact data representation must be generated and a trade-off struck between resource consumption and analytical precision. Many existing methods generate volumetric representations, but almost all of them rely on hand-engineered heuristics to extract the effective portion of the data, and these heuristics prevent the trade-off between resource consumption and analytical quality from being established in a principled way. In this paper, we present a deep-learning-based method that adaptively captures the inherently complicated dynamics of spatiotemporal volumetric datasets without introducing any hand-engineered features. We train an autoencoder-based neural network with quantization and adaptation. Compared with existing methods, ours learns data representations at a much lower compressed-to-uncompressed size ratio while preserving the details of the original datasets. Our method also adapts to different data distributions and performs compression and decompression in real time. Through extensive experiments, we demonstrate the effectiveness and efficiency of our approach over existing methods.
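To make the encode-quantize-decode pipeline mentioned above concrete, the following is a minimal sketch, not the paper's actual method: the "autoencoder" here is a single linear map with random placeholder weights standing in for trained network parameters, and the brick size (8x8x8), latent dimension (64), and 8-bit uniform quantizer are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of an encode -> quantize -> decode pipeline for a
# volumetric data brick. Weights are random placeholders, NOT a trained model.
rng = np.random.default_rng(0)

# A toy volumetric block: 512 float32 values flattened from an 8x8x8 brick.
block = rng.standard_normal(512).astype(np.float32)

# "Encoder" and "decoder" as linear maps to/from a 64-dim latent code.
W_enc = rng.standard_normal((64, 512)).astype(np.float32) / np.sqrt(512)
W_dec = rng.standard_normal((512, 64)).astype(np.float32) / np.sqrt(64)

latent = W_enc @ block                       # encode

# Uniform 8-bit quantization of the latent code.
lo, hi = float(latent.min()), float(latent.max())
q = np.round((latent - lo) / (hi - lo) * 255).astype(np.uint8)

# Dequantize, then decode back to the original resolution.
latent_hat = q.astype(np.float32) / 255 * (hi - lo) + lo
block_hat = W_dec @ latent_hat               # decode

# Compressed payload: 64 one-byte codes plus two floats of range metadata,
# versus 2048 bytes of raw float32 input.
ratio = (q.nbytes + 8) / block.nbytes
print(f"compressed/uncompressed ratio: {ratio:.4f}")
```

A trained deep autoencoder replaces the random linear maps, so the latent code actually preserves the structure of the data; the quantization step is what bounds the size of the stored representation, which is why the abstract reports the compressed-to-uncompressed ratio as the headline metric.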