## Abstract

Some expected features of sample patterns in a classification system may be missing, immeasurable, or obscured. When a sample pattern with such an incomplete set of features is presented to a classifier, how reliable is the resulting decision? What effect do the absent features have on the accuracy and correctness of the classification? How can the probability of error of such a decision be evaluated, or estimated, relative to a decision made with more features or a supposedly complete feature set? We derive solutions to these problems from the point of view of the Bayes classifier. Traditional decision theory is extended to classification with incomplete feature sets. We show that, when classifying incomplete patterns, the Bayes classifier constructed on the marginal distributions of the available features also minimizes the probability of error. We then use the Bhattacharyya bound to obtain an analytic form for estimating the error probability of classification made on the partially available features, relative to that made using more features or the complete set. Guidelines are given for deciding in which situations the effect of missing or obscured features can be neglected, and in which it must be seriously considered. Experimental results are consistent with the theoretical analysis.
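The two ideas in the abstract can be illustrated concretely for Gaussian class-conditional densities, where marginalizing out a missing feature amounts to selecting a sub-vector of the mean and a sub-matrix of the covariance, and the Bhattacharyya bound on the Bayes error has a closed form. The sketch below is illustrative only, assuming two equiprobable Gaussian classes with hypothetical parameters; it is not the paper's experimental setup.

```python
import numpy as np

def bhattacharyya_bound(mu1, cov1, mu2, cov2, p1=0.5):
    """Bhattacharyya upper bound on the Bayes error for two Gaussian classes:
    P_e <= sqrt(p1 * p2) * exp(-D_B), with the standard Gaussian form of D_B."""
    cov = (cov1 + cov2) / 2.0
    diff = mu2 - mu1
    d = 0.125 * diff @ np.linalg.solve(cov, diff) \
        + 0.5 * np.log(np.linalg.det(cov)
                       / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return np.sqrt(p1 * (1.0 - p1)) * np.exp(-d)

def marginal(mu, cov, observed):
    """Marginal Gaussian over the observed feature indices (missing features
    are integrated out, which for a Gaussian is just index selection)."""
    idx = np.asarray(observed)
    return mu[idx], cov[np.ix_(idx, idx)]

# Two hypothetical 3-feature Gaussian classes.
mu1, mu2 = np.array([0.0, 0.0, 0.0]), np.array([2.0, 1.0, 0.5])
cov1 = cov2 = np.eye(3)

# Bound using the complete feature set.
full = bhattacharyya_bound(mu1, cov1, mu2, cov2)

# Suppose feature 2 is missing: classify on the marginals of features 0 and 1.
m1, c1 = marginal(mu1, cov1, [0, 1])
m2, c2 = marginal(mu2, cov2, [0, 1])
partial = bhattacharyya_bound(m1, c1, m2, c2)

# Dropping a feature can only raise (or leave unchanged) the error bound;
# comparing the two bounds indicates how much the missing feature matters.
print(f"full-feature bound:    {full:.4f}")
print(f"partial-feature bound: {partial:.4f}")
```

Comparing the two bounds gives an analytic estimate, in the spirit of the paper, of whether a missing feature's effect is negligible (bounds nearly equal) or serious (bound increases substantially).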

Original language | English (US)
---|---
Pages (from-to) | 1281-1290
Number of pages | 10
Journal | Pattern Recognition
Volume | 23
Issue number | 11
DOIs |
State | Published - 1990
Externally published | Yes

## Keywords

- Bayes classifier
- Bayes decision rule
- Bhattacharyya bound
- Class set
- Feature vectors
- Incomplete patterns
- Pattern classification
- Probability of error
- Samples

## ASJC Scopus subject areas

- Software
- Signal Processing
- Computer Vision and Pattern Recognition
- Artificial Intelligence