Human online inference in the presence of temporal structure 

HI and HD signals in the real world

To make appropriate decisions in relation to their environment, humans and animals must infer the state of the surrounding world on the basis of the sensory signals they receive. If these signals are noisy and the environment is dynamic, the inference task can be difficult: a new incoming signal may reflect either noise or a change in the underlying state. However, if events in the world present some kind of temporal structure, as in our HD signal, it is possible to use that structure to refine one's inference. Conversely, if events follow a Poisson process, as in the HI signal, their occurrences present no particular temporal structure, and what just happened conveys no information about what is likely to happen next. Hence, there is a fundamental difference between the HI and HD conditions, which impacts the inference of an optimal observer. Many natural events are not Poisson-distributed in time, and exhibit strong regularities.
Refs. [75, 76, 77, 78] report recordings of the motor activity of both rodents and human subjects over the course of several days. In both species, the time intervals between motion events were distributed as a power law, a distribution characterized by a long tail, leading to bursts, or clusters, of events followed by long waiting epochs. The durations of motion episodes also exhibited heavy tails. Such distributions are incompatible with Poisson processes, which yield exponentially distributed inter-event intervals. Moreover, both rodent and human activity exhibited long-range correlations, another feature that cannot be explained by a Poisson process, nor even by an inhomogeneous Poisson process (i.e., a process with a time-varying rate). A particular form of autocorrelation is periodicity, which occurs in a wide range of phenomena. In the context of human motor behavior, we note that walking is a highly rhythmical activity [79, 80]. Circadian and seasonal cycles are other examples of periodicity. More complex patterns exist (neither clustered nor periodic), such as in human speech, which presents a variety of strong temporal structures, whether at the level of syllables, stresses, or pauses [81, 82, 83]. In all these examples, natural mechanisms produce series of temporally structured events. The ubiquity of history-dependent statistics of events in nature calls for the exploration of inference mechanisms in their presence.
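To make the distinction concrete, the short sketch below contrasts the empirical hazard of exponentially distributed inter-event intervals, as produced by a Poisson process, with that of power-law intervals of the kind reported in these studies; the exponent and time scale are placeholder values chosen for illustration, not estimates from Refs. [75, 76, 77, 78].

```python
import numpy as np

rng = np.random.default_rng(0)

# A homogeneous Poisson process has exponential inter-event intervals and
# is memoryless: the time already waited is uninformative about the next event.
poisson_intervals = rng.exponential(scale=10.0, size=100_000)

# A heavy-tailed alternative in the spirit of the power laws reported in
# Refs. [75, 76, 77, 78]; the exponent and scale are placeholders, not fits.
power_law_intervals = 10.0 * rng.pareto(1.5, size=100_000)

def empirical_hazard(intervals, t, dt=1.0):
    """P(event in [t, t + dt) | no event before t), estimated from samples."""
    survived = intervals >= t
    fired = survived & (intervals < t + dt)
    return fired.sum() / max(survived.sum(), 1)

for t in (1.0, 10.0, 50.0):
    print(t, empirical_hazard(poisson_intervals, t),
          empirical_hazard(power_law_intervals, t))
# The exponential hazard stays flat as t grows, whereas the power-law hazard
# decays: after a long quiet epoch, the next event becomes even less imminent.
```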

Behavioral and neural responses in the presence of different temporal statistics

History-dependent signals have been used widely in studies of perception and decision-making, in both humans and animals. In a number of experiments [84, 85, 86, 46, 47], a first event (a sensory cue, or a motor action such as a lever press) is followed by a second event, such as the delivery of a reward or a 'go' signal triggering the next behavior. The time elapsed between these two events – the 'reward delay' or the 'waiting time' – is randomized and sampled from distributions that, depending on the study, vary in mean, variance, or shape. For instance, both Refs. [84] and [85] use unimodal and bimodal temporal distributions. Because of the stochasticity of this waiting time, the probability of occurrence of the second event varies with time, similarly to the probability of a change point in our HD condition; these studies explore whether variations of this probability are captured by human and animal subjects. In Ref. [84], recordings from the V4 cortical area in rhesus monkeys indicate that, for both unimodal and bimodal waiting-time distributions, the attentional modulation of sensory neurons varies consistently with this event probability. In Ref. [85], the reaction times of macaques are inversely related to the event probability, for both unimodal and bimodal distributions, and the activity of neurons in the lateral intraparietal (LIP) area is correlated with the evolution of this probability over time. Reference [86] manipulates another attribute of the distribution of reward delays: between blocks of trials, the standard deviation of this distribution is changed, while the mean is left unchanged. Mice, in this situation, are shown to adapt their waiting times to this variability of reward delays, consistent with a probabilistic inference model of reward timing.
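The time-varying event probability that these studies manipulate is naturally expressed as a hazard rate, the probability that the event occurs now given that it has not yet occurred. The sketch below computes it for generic unimodal and bimodal waiting-time densities; these densities are illustrative choices, not those used in Refs. [84, 85].

```python
import numpy as np

def gaussian(t, mu, sigma):
    """Normal density, used here to build illustrative waiting-time distributions."""
    return np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def hazard(density, t):
    """h(t) = f(t) / (1 - F(t)): probability of the event occurring now,
    given that it has not occurred yet."""
    dt = t[1] - t[0]
    cdf = np.cumsum(density) * dt
    survival = np.clip(1.0 - cdf, 1e-9, None)
    return density / survival

t = np.linspace(0.0, 3.0, 301)
unimodal = gaussian(t, 1.5, 0.3)
bimodal = 0.5 * gaussian(t, 1.0, 0.15) + 0.5 * gaussian(t, 2.0, 0.15)

# For the bimodal density, the hazard dips between the two modes: right after
# the first mode has passed, the event is momentarily unlikely, a
# non-monotonicity mirrored by the reaction times reported in Ref. [85].
h_uni, h_bi = hazard(unimodal, t), hazard(bimodal, t)
```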

Impact of an erroneous belief on the temporal statistics of the signal

Given both the similarities and the discrepancies between the behavior of subjects and the output of the optimal Bayesian model, we seek alterations of the model that can account for human behavior. An obvious weak spot of the optimal model is the assumption of a perfectly faithful representation of the set of parameters governing the statistics of the signal presented to the subject. Although subjects were exposed, in training phases, to blocks of stimuli in which the state, s_t, was made explicitly visible, they may have learnt the generative parameters incorrectly. We explore this possibility, and, here, we focus on the change probability, q(τ), which governs the dynamics of the state. (We found that altering the value of this parameter had an appreciable impact, as compared to altering the values of other parameters; results for other parameters not shown.) In the HD case, q(τ) is a sigmoid function of the run-length, τ, characterized by two parameters: its slope, equal to 1, which characterizes 'how suddenly' change points become likely as a function of τ; and the average duration of inter-change-point intervals, T = 10. In the HI case, q = 0.1 is constant; it can also be interpreted as an extreme case of a sigmoid with slope 0 and T = 1/q = 10 (Fig. 3A). We implement a suboptimal model, referred to as IncorrectQ, in which these two quantities, the slope and T, are treated as free parameters, and we explore how varying these parameters impacts behavior, as compared to the OptimalInference model.
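To make the two regimes concrete, the sketch below computes the mean inter-change-point interval implied by a change probability q(τ), using the identity E[τ] = Σ_t P(τ ≥ t). The logistic form of the sigmoid and its offset, tau0, are assumptions of this sketch: the text above fixes only the slope, 1, and the mean duration, T = 10.

```python
import numpy as np

def mean_run_length(q, tau_max=10_000):
    """Mean interval between change points implied by a change probability
    q(tau): E[tau] = sum_t P(tau >= t), with P(tau >= t) = prod_{k<t} (1 - q(k))."""
    taus = np.arange(1, tau_max + 1)
    survival = np.concatenate(([1.0], np.cumprod(1.0 - q(taus))[:-1]))
    return survival.sum()

# HI condition: constant change probability q = 0.1, i.e., a sigmoid with
# slope 0; the mean inter-change-point interval is T = 1/q = 10.
print(mean_run_length(lambda taus: np.full(taus.shape, 0.1)))  # 10.0

# HD condition: q(tau) rises as a sigmoid of the run-length. The logistic
# form and the offset tau0 are placeholder choices made for this sketch.
slope, tau0 = 1.0, 10.0
q_hd = lambda taus: 1.0 / (1.0 + np.exp(-slope * (taus - tau0)))
print(mean_run_length(q_hd))  # close to 10: changes become likely near tau0
```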

Models with limited memory and variability in the inference step or the response-selection step

In the two models we introduce now, stochasticity stems from the inference step (first model) or the selection step (second model). We call these models Sample and Sampling, respectively. The Sample model is a stochastic version of the MaxProb model: instead of retaining the N most likely run-lengths at each time step, the Sample model samples N run-lengths from the marginal distribution of the run-lengths, p_t(τ|x_{1:t}). By contrast, in the Sampling model, the inference step is carried out optimally, and in particular the full Bayesian posterior is computed; but the selection step is stochastic, and hence suboptimal. Specifically, the estimate (the response) is obtained by sampling the posterior. (Later, we shall consider hybrid models, in which sampling in the selection step is combined with a suboptimal inference step; in this section, the Sampling model is suboptimal only in its selection step.)

Sample model

Similarly to the MaxProb model, the Sample model is characterized by an approximate joint posterior, p̃_t(s, τ|x_{1:t}), and is parametrized by the integer N, the maximum number of run-lengths with non-vanishing probabilities under the approximate posterior. The parameter N reflects, here again, the 'memory capacity' of the model. At each time step, upon the presentation of a new signal, x_{t+1}, a posterior distribution, p_{t+1}(s, τ|x_{1:t+1}), is computed according to the Bayesian update equation (Eq. (2.2)), and the marginal distribution over the run-length, p_{t+1}(τ|x_{1:t+1}), is derived from it. If fewer than N+1 run-lengths have a non-vanishing probability, then we set p̃_{t+1}(s, τ|x_{1:t+1}) = p_{t+1}(s, τ|x_{1:t+1}). If N+1 run-lengths have a non-vanishing probability, then, rather than eliminating the most unlikely run-length (as in the MaxProb model), an unlikely run-length, τ, is chosen randomly: specifically, τ is sampled from the distribution [1 − p_{t+1}(τ|x_{1:t+1})]/z_{t+1}, where z_{t+1} is a normalization factor. Finally, the approximate posterior at time t+1, p̃_{t+1}(s, τ|x_{1:t+1}), is obtained analogously to that of the MaxProb model (Eq. (2.9)).
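A minimal sketch of this pruning step follows, assuming the marginal over run-lengths is held as a dictionary. The helper prune_run_lengths is hypothetical, written for this illustration; it simply renormalizes the surviving run-lengths rather than applying the reallocation of Eq. (2.9), which is not reproduced here.

```python
import numpy as np

def prune_run_lengths(marginal, n_max, rng):
    """One pruning step of the Sample model (sketch).

    'marginal' maps run-lengths tau to p_{t+1}(tau | x_{1:t+1}). If more than
    n_max run-lengths have non-vanishing probability, one unlikely run-length
    is dropped at random, sampled with probability proportional to
    1 - p_{t+1}(tau | x_{1:t+1})."""
    support = [tau for tau, p in marginal.items() if p > 0.0]
    if len(support) <= n_max:
        return marginal  # nothing to prune
    weights = np.array([1.0 - marginal[tau] for tau in support])
    weights /= weights.sum()          # the normalization factor z_{t+1}
    dropped = rng.choice(support, p=weights)
    pruned = {tau: p for tau, p in marginal.items() if tau != dropped}
    total = sum(pruned.values())      # simple renormalization, in lieu of Eq. (2.9)
    return {tau: p / total for tau, p in pruned.items()}

# Example: N = 3 run-lengths allowed, while 4 have non-zero probability;
# the least likely run-length (tau = 0) is the most likely to be dropped.
rng = np.random.default_rng(1)
marginal = {0: 0.05, 1: 0.15, 2: 0.30, 3: 0.50}
print(prune_run_lengths(marginal, n_max=3, rng=rng))
```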

Table of contents:

Introduction
Expected utility hypothesis
Bayesian probabilities
Bayesian models of human inference
Online inference
Questions
Outline
1 Human online inference in the presence of temporal structure 
Results
Behavioral task, and history-independent vs. history-dependent stimuli
Learning rates adapt to the temporal statistics in the stimulus
The variability in subjects’ responses varies over the course of inference
A simple, approximate Bayesian model
Human repetition propensity
Discussion
HI and HD signals in the real world
Behavioral and neural responses in the presence of different temporal statistics
Inference with HI and HD change points
Methods
Details of the behavioral task
Subjects
Details of the signal
Training runs
Empirical run-length
Details of approximate model
Model self-consistency
Statistical tests
Supplementary tables: statistical tests
Supplementary figure: data analysis excluding all occurrences of repetitions
2 Cognitive models of human online inference 
Results
Behavioral inference task
Optimal estimation: Bayesian update and maximization of expected reward
The optimal model captures qualitative trends in learning rate and repetition propensity
Impact of an erroneous belief on the temporal statistics of the signal
Impact of limited run-length memory
Does behavioral variability depend on the inference process?
Models with limited memory and variability in the inference step or the response-selection step
Stochastic model with sampling in time and in state space: the particle filter
Fitting models to experimental data favors sample-based inference
Discussion
Online Bayesian inference
Holding an incorrect belief on temporal statistics
Sampling versus noisy maximization
Stochastic inference and particle filters
Robustness of model fitting
Sample-based representations of probability
Methods
Bayesian update equation
Nodes model
Particle Filter
Model Fit
3 Sequential effects in the online inference of a Bernoulli parameter 
Sequential effects
A framework of Bayesian inference under constraint
Predictability-cost models
The Bernoulli case
Inferring conditional probabilities
Representation-cost models
The Bernoulli observer
The conditional-probabilities observer
Discussion
Preliminary experimental results
Leaky integration
Variational approach to approximate Bayesian inference
Representation cost and the theory of rational inattention
Bibliography 
4 Summary in French 
