Information conversion, effective samples, and parameter size

Xiaodong Lin, Jennifer Pittman, Bertrand Clarke

Research output: Contribution to journal › Article › peer-review

8 Scopus citations

Abstract

Consider the relative entropy between a posterior density for a parameter given a sample and a second posterior density for the same parameter, based on a different model and a different data set. The relative entropy can then be minimized over the second sample to obtain a virtual sample that makes the second posterior as close as possible to the first in an informational sense. If the first posterior is based on a dependent data set and the second posterior uses an independence model, the optimization transfers the effective inferential power of the dependent sample into the independent sample. Examples of this optimization are presented for models with nuisance parameters, finite mixture models, and models for correlated data. Our approach is also used to choose the effective parameter size in a Bayesian hierarchical model.
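
The optimization the abstract describes can be written compactly; the following display is a sketch in our own notation, not the paper's (here p_1 denotes the posterior under the dependence model given the observed sample x^(n), and p_2 the posterior under the independence model given a candidate sample y^(m)):

\[
\hat{y}^{(m)} \;=\; \operatorname*{arg\,min}_{y^{(m)}}\; D\bigl(p_1(\cdot \mid x^{(n)}) \,\big\|\, p_2(\cdot \mid y^{(m)})\bigr),
\qquad
D(p \,\|\, q) \;=\; \int p(\theta)\,\log\frac{p(\theta)}{q(\theta)}\,d\theta .
\]

The minimizer is the virtual sample: the data set that, under the independence model, carries as much of the first posterior's information as possible.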

Original language: English (US)
Pages (from-to): 4438-4456
Number of pages: 19
Journal: IEEE Transactions on Information Theory
Volume: 53
Issue number: 12
DOIs
State: Published - 2007
Externally published: Yes

Keywords

  • Asymptotic relative efficiency
  • Number of parameters
  • Relative entropy
  • Sample size

ASJC Scopus subject areas

  • Information Systems
  • Computer Science Applications
  • Library and Information Sciences
