TY - GEN
T1 - Optimal division of dataset into three subsets for Artificial Neural Network models
AU - Sahoo, Goloka Behari
AU - Ray, Chittaranjan
PY - 2007
Y1 - 2007
AB - The generalization ability of Artificial Neural Networks (ANNs) is significantly undermined if the datasets presented to the ANN for training do not contain sufficient information in all dimensions of the modeling domain. Under practical circumstances, it is often not possible to obtain a large dataset for ANN use. This paper presents a systematic approach that answers the questions "which samples" and "how many samples" should be selected for the datasets required by an ANN when the available data are limited. The predictive ability of an ANN largely depends on the network's structure and internal parameters. Although some guidance is available in the literature for the choice of geometry and internal parameters, the solution space spans a large range, and thus the number of possible combinations is very large. This paper presents the use of micro genetic algorithms (μGA) to develop a μGA-ANN model that searches for the optimal combination of ANN geometry and internal parameters.
KW - Back propagation neural network
KW - Data division
KW - Micro genetic algorithms
KW - Optimization of network geometry
KW - Radial basis function network
KW - Self-organizing map
UR - http://www.scopus.com/inward/record.url?scp=84872063279&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84872063279&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:84872063279
SN - 9780972741224
T3 - Proceedings of the 3rd Indian International Conference on Artificial Intelligence, IICAI 2007
SP - 859
EP - 872
BT - Proceedings of the 3rd Indian International Conference on Artificial Intelligence, IICAI 2007
T2 - 3rd Indian International Conference on Artificial Intelligence, IICAI 2007
Y2 - 17 December 2007 through 19 December 2007
ER -