Advanced measurement techniques such as genomics can acquire high-throughput, high-dimensional data, enabling new scientific discoveries and offering unique insights in biomedical research. However, biological measurements are easily affected by systematic variation, especially when they are obtained from distinct batches involving different platforms and experimental conditions. Such batch effects are often larger than the biological signal of interest and, if not properly handled, can invalidate downstream analysis and lead to false discoveries. Here we propose a new learning approach based on multivariate distribution matching in a latent space that removes batch effects while preserving the signals of interest. This data-driven approach consists of three key components: an autoencoder trained to encode the data into a small set of low-dimensional neurons that capture the data pattern; a similarity-measurement procedure that identifies the neurons associated with batch effects; and a residual-network-based matching framework that transforms the distribution of the affected neurons from one batch to another, after which the adjusted neurons are decoded to reconstruct new datasets with the batch effect removed. The effectiveness of the proposed approach is validated in several ways using public genomic data on Alzheimer's disease. The new method provides a highly promising tool for complex batch-effect adjustment and outperforms other commonly used methods.
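The three-stage pipeline described above can be sketched with a minimal NumPy example. This is an illustrative simplification, not the method itself: a linear PCA encoder/decoder stands in for the trained autoencoder, a standardized-mean-difference screen stands in for the similarity-measurement procedure, and per-dimension moment matching stands in for the residual-network matcher. All function names, the threshold `thresh`, and the simulated two-batch data are assumptions for illustration only.

```python
import numpy as np

def pca_fit(X, k):
    # Stand-in for the autoencoder: PCA gives a linear encoder (projection
    # onto the top-k principal directions) and a matching linear decoder.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k].T                      # d x k projection weights
    return mu, W

def encode(X, mu, W):
    return (X - mu) @ W               # n x k latent codes ("neurons")

def decode(Z, mu, W):
    return Z @ W.T + mu               # reconstruct n x d data

def batch_associated_dims(Za, Zb, thresh=1.0):
    # Stand-in for the similarity measure: flag latent dimensions whose
    # standardized mean difference between the two batches exceeds thresh.
    pooled_sd = np.sqrt(0.5 * (Za.var(axis=0) + Zb.var(axis=0))) + 1e-12
    smd = np.abs(Za.mean(axis=0) - Zb.mean(axis=0)) / pooled_sd
    return smd > thresh

def match_moments(Za, Zb, dims):
    # Stand-in for the residual-network matcher: align each flagged
    # dimension of batch B to batch A by mean/variance matching.
    Zb_adj = Zb.copy()
    for j in np.where(dims)[0]:
        zb = (Zb[:, j] - Zb[:, j].mean()) / (Zb[:, j].std() + 1e-12)
        Zb_adj[:, j] = zb * Za[:, j].std() + Za[:, j].mean()
    return Zb_adj

# Simulated two-batch data: batch B carries an additive batch shift.
rng = np.random.default_rng(0)
d, k = 20, 5
Xa = rng.normal(size=(200, d))            # batch A
Xb = rng.normal(size=(200, d)) + 3.0      # batch B, shifted by a batch effect

mu, W = pca_fit(np.vstack([Xa, Xb]), k)
Za, Zb = encode(Xa, mu, W), encode(Xb, mu, W)
dims = batch_associated_dims(Za, Zb)      # neurons carrying the batch effect
Zb_adj = match_moments(Za, Zb, dims)      # adjust only those neurons
Xb_adj = decode(Zb_adj, mu, W)            # batch-corrected reconstruction
```

Because only the flagged latent dimensions are adjusted, the remaining dimensions, which carry the biological signal in this toy setup, pass through the decoder unchanged; this mirrors the paper's goal of removing batch effects while preserving the signal of interest.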