Volumes of high-throughput assay data have been made publicly available. These massive repositories of biological data provide a wealth of information that can be harnessed to investigate pressing questions regarding aging and disease. However, there is a distinct imbalance between available data generation techniques and data analysis methodology development. Echoing the four "V's" of big data, biological data has volume, velocity, and heterogeneity, and is prone to error; as a result, methods for analysis of this "biomedical big data" have developed at a slower rate. One promising solution to this multi-dimensional issue is the network model, which has emerged as an effective tool for analysis because it is capable of representing biological relationships en masse. Here we examine the need for standards and workflows in the usage of the correlation network model, where nodes represent genes and edges represent correlation between their expression patterns. One structure identified as biologically relevant in a correlation network, the gateway node, represents a gene that changes in co-expression between two different states. In this research, we manipulate the parameters used to identify gateway nodes within a given dataset to determine the consistency of results across network building and clustering approaches. This proof-of-concept is important to investigate because there is a growing pool of methods used for the various steps in a network analysis workflow, which can undermine robustness, consistency, and reproducibility. This research compares the original gateway node analysis approach against manipulations in (1) network creation and (2) clustering analysis to test the consistency of structural results in the correlation network. Before these approaches can be trusted, it must be recognized that even minor changes in methodology can have sweeping effects on results.
The results of this study lead the authors to call for stronger benchmarking and reproducibility studies in biomedical "big data" analyses.
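The abstract does not specify how the correlation network is built or how gateway nodes are scored. The following is a minimal sketch under stated assumptions: Pearson correlation with a hard edge threshold (here 0.8), and a gateway score defined as the number of edges a gene gains or loses between two states (the symmetric difference of its neighborhoods). The threshold, the scoring rule, and the function names are illustrative choices, not the authors' method.

```python
import numpy as np

def correlation_network(expr, threshold=0.8):
    """Adjacency matrix of a correlation network.

    expr: (n_genes, n_samples) expression matrix.
    An edge links two genes when |Pearson r| >= threshold
    (hard thresholding -- an illustrative assumption).
    """
    r = np.corrcoef(expr)            # gene-by-gene Pearson correlations
    adj = np.abs(r) >= threshold
    np.fill_diagonal(adj, False)     # no self-edges
    return adj

def gateway_scores(adj_a, adj_b):
    """Score each gene by how many of its co-expression edges
    differ between two states (symmetric difference of neighbors)."""
    changed = adj_a ^ adj_b          # edges present in exactly one state
    return changed.sum(axis=1)

# Toy data: genes 0-3 share one signal in state A; in state B,
# gene 0 is decoupled, so it should surface as a gateway candidate.
rng = np.random.default_rng(0)
signal = rng.normal(size=50)
state_a = np.vstack(
    [signal + rng.normal(scale=0.1, size=50) for _ in range(4)]
    + [rng.normal(size=50) for _ in range(2)]
)
state_b = state_a.copy()
state_b[0] = rng.normal(size=50)     # perturb gene 0's expression

adj_a = correlation_network(state_a)
adj_b = correlation_network(state_b)
scores = gateway_scores(adj_a, adj_b)
```

In this toy setup, gene 0 loses its edges to genes 1-3 between states and therefore receives the highest gateway score. The abstract's point is that results like this hinge on such parameter choices: a different threshold, correlation measure, or clustering step can change which genes are flagged.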