Abstract
The notion of prediction error has established itself at the heart of formal models of animal learning and current hypotheses of dopamine function. Several interpretations of prediction error have been offered, including the model-free reinforcement learning method known as temporal difference learning (TD), and the important Rescorla-Wagner (RW) learning rule. Here, we present a model-based adaptation of these ideas that provides a good account of empirical data pertaining to dopamine neuron firing patterns and associative learning paradigms such as latent inhibition, Kamin blocking and overshadowing. Our departure from model-free reinforcement learning also offers: 1) a parsimonious distinction between tonic and phasic dopamine functions; 2) a potential generalization of the role of phasic dopamine from valence-dependent "reward" processing to valence-independent "salience" processing; 3) an explanation for the selectivity of certain dopamine manipulations on motivation for distal rewards; and 4) a plausible link between formal notions of prediction error and accounts of disturbances of thought in schizophrenia (in which dopamine dysfunction is strongly implicated). The model distinguishes itself from existing accounts by offering novel predictions pertaining to the firing of dopamine neurons in various untested behavioral scenarios.
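As background (a sketch, not the paper's model-based account), the two model-free prediction-error rules the abstract contrasts with — the Rescorla-Wagner rule and temporal difference learning — can be illustrated as follows. All function names, parameter values, and trial schedules here are illustrative assumptions; the Kamin blocking demonstration mirrors the paradigm the abstract mentions.

```python
def rescorla_wagner(trials, alpha=0.1, lam=1.0):
    """Rescorla-Wagner rule: each trial lists the cues present and whether
    reward followed. A single shared prediction error (outcome minus the
    summed associative strength of all present cues) updates every
    present cue. Parameters alpha and lam are illustrative choices."""
    V = {}
    for cues, rewarded in trials:
        prediction = sum(V.get(c, 0.0) for c in cues)
        error = (lam if rewarded else 0.0) - prediction
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * error
    return V

# Kamin blocking: pretraining on A alone leaves almost no prediction
# error for B to absorb when the AB compound is later rewarded.
trials = [(("A",), True)] * 50 + [(("A", "B"), True)] * 50
V = rescorla_wagner(trials)
print(V["A"] > 0.9, V["B"] < 0.1)  # A predicts the reward; B is "blocked"

def td_update(V, s, s_next, r, alpha=0.1, gamma=0.9):
    """Temporal-difference (TD) learning: the prediction error compares
    received reward plus the discounted value of the next state against
    the current state's value estimate, then nudges that estimate."""
    delta = r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)
    V[s] = V.get(s, 0.0) + alpha * delta
    return delta
```

Unlike Rescorla-Wagner, which assigns one error per trial, TD propagates error backwards through time via the `gamma * V[s_next]` term — the property usually invoked to explain why dopamine responses transfer from rewards to their earliest reliable predictors.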
Original language | English (US) |
---|---|
Pages (from-to) | 61-84 |
Number of pages | 24 |
Journal | Network: Computation in Neural Systems |
Volume | 17 |
Issue number | 1 |
State | Published - Mar 2006 |
Externally published | Yes |
Keywords
- Associative learning
- Blocking
- Dopamine
- Incentive salience
- Latent inhibition
- Motivated behavior
- Overshadowing
- Prediction error
- Psychosis
- Reinforcement learning
- Rescorla-Wagner learning rule
- Schizophrenia
- Temporal difference algorithm
ASJC Scopus subject areas
- Neuroscience (miscellaneous)