Abstract
Consider the relative entropy between the posterior density of a parameter given a sample and a second posterior density for the same parameter, based on a different model and a different data set. The relative entropy can be minimized over the second sample to obtain a virtual sample that would make the second posterior as close as possible to the first in an informational sense. If the first posterior is based on a dependent data set and the second posterior uses an independence model, the optimization transfers the effective inferential power of the dependent sample into the independent sample. Examples of this optimization are presented for models with nuisance parameters, finite mixture models, and models for correlated data. Our approach is also used to choose the effective parameter size in a Bayesian hierarchical model.
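As a rough illustration of the optimization the abstract describes (a minimal sketch, not an example taken from the paper), the code below works through the simplest conjugate case: a normal mean with a flat prior and known noise variance, a first posterior built from equicorrelated observations, and a second posterior built from a hypothetical i.i.d. sample summarized by its mean `ybar` and size `m`. Both posteriors are Gaussian, so the relative entropy has a closed form and can be minimized numerically over `(ybar, m)`; the minimizing `m` plays the role of the effective sample size of the dependent data. The equicorrelated setup and all variable names are assumptions made for this sketch.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical setup (not from the paper): theta is a normal mean with a
# flat prior and known noise variance sigma2. Posterior 1 comes from n
# equicorrelated observations (correlation rho); posterior 2 assumes an
# independence model with m virtual observations averaging ybar.
rng = np.random.default_rng(0)
n, sigma2, rho = 30, 1.0, 0.6

# One equicorrelated sample: cov = sigma2 * ((1 - rho) I + rho 11')
cov = sigma2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))
x = rng.multivariate_normal(np.zeros(n), cov)

# Posterior 1 (dependence model, flat prior): N(mu1, v1), where the
# precision is 1' Sigma^{-1} 1 and the mean is the GLS average of x.
prec1 = np.ones(n) @ np.linalg.solve(cov, np.ones(n))
mu1 = (np.ones(n) @ np.linalg.solve(cov, x)) / prec1
v1 = 1.0 / prec1

def kl_to_independent(params):
    """KL( N(mu1, v1) || N(ybar, sigma2 / m) ) for a virtual iid sample."""
    ybar, m = params
    v2 = sigma2 / m
    return 0.5 * (v1 / v2 + (ybar - mu1) ** 2 / v2 - 1.0 + np.log(v2 / v1))

res = minimize(kl_to_independent, x0=[mu1, n / 2],
               bounds=[(None, None), (1e-6, None)])
ybar_star, m_star = res.x
print(f"virtual sample mean {ybar_star:.3f}, effective size {m_star:.2f}")
```

In this conjugate setting the minimizer can also be read off analytically: `ybar* = mu1` and `m* = sigma2 / v1`, so with strong correlation the effective size falls far below `n`. The numerical optimizer is included only as a template for non-conjugate models, where no closed form is available.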
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 4438-4456 |
| Number of pages | 19 |
| Journal | IEEE Transactions on Information Theory |
| Volume | 53 |
| Issue number | 12 |
| DOIs | |
| State | Published - 2007 |
| Externally published | Yes |
Keywords
- Asymptotic relative efficiency
- Number of parameters
- Relative entropy
- Sample size
ASJC Scopus subject areas
- Information Systems
- Computer Science Applications
- Library and Information Sciences