Parameter estimates and model evidence
The two critical results of any computational model are (1) how well the model fits the data (model evidence) and (2) the estimated parameters of the model. In this section, we cover both.
Parameter estimates¶
We load a built-in dataset and fit the default ReMeta model.
ds = remeta.load_dataset('default')
rem = remeta.ReMeta(optim_type2_gridsearch=False)
rem.fit(ds.stimuli, ds.choices, ds.confidence)

To access the results, it is recommended to first invoke the summary() method on the ReMeta instance:

result = rem.summary()

The final parameter estimates are accessible via:
result.params

{'type1_noise': np.float64(0.5095607807701794),
 'type1_bias': np.float64(-0.09903165092790608),
 'type2_noise': np.float64(0.26434207348657335),
 'type2_criteria': array([0.26752641, 0.51154427, 0.75795668])}

In addition, each fitting stage (type 1 and type 2) can be accessed separately:
result.type1.params

{'type1_noise': np.float64(0.5095607807701794),
 'type1_bias': np.float64(-0.09903165092790608)}

result.type2.params

{'type2_noise': np.float64(0.26434207348657335),
 'type2_criteria': array([0.26752641, 0.51154427, 0.75795668])}

Uncertainty of parameter estimates¶
For each parameter estimate, ReMeta computes a standard error, reflecting the uncertainty in the parameter estimate:
result.params_se

{'type1_noise': np.float64(0.017564176485066366),
 'type1_bias': np.float64(0.01897664187387822),
 'type2_noise': np.float64(0.031464384754221524),
 'type2_criteria': array([0.01401871, 0.01208081, 0.00817252])}

The standard error is computed based on the Hessian matrix $H$, which contains the second derivatives (i.e. curvature) of the log-likelihood evaluated at the estimated parameters $\hat{\theta}$:

$$H_{ij} = \left. \frac{\partial^2 \ln \mathcal{L}(\theta)}{\partial \theta_i \, \partial \theta_j} \right|_{\theta = \hat{\theta}}$$

The Hessian is a $k \times k$ matrix, with $k$ being the number of parameters. By inverting the negative Hessian, we get the covariance matrix:

$$\Sigma = (-H)^{-1}$$

The estimated standard error of the $i$-th parameter is the square root of the $i$-th diagonal element of the covariance matrix:

$$\mathrm{SE}(\hat{\theta}_i) = \sqrt{\Sigma_{ii}}$$

The Hessian of the log-likelihood evaluated at $\hat{\theta}$ measures the local curvature of the likelihood surface at that point. A steep curvature indicates that even small deviations from $\hat{\theta}$ are inconsistent with the observed data, and thus implies high certainty about the parameter estimate. In contrast, a flat curvature indicates that the log-likelihood changes only gradually near $\hat{\theta}$, meaning that a wider range of parameter values is consistent with the data and we are less certain about the estimate.
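To make this computation concrete, here is a minimal sketch (not ReMeta's internal code) that recovers a standard error from the curvature of a log-likelihood. It uses a toy model, estimating the mean of a normal sample with known sigma, whose analytic standard error is $\sigma/\sqrt{n}$:

```python
import math

# Toy data: normal sample with known sigma (made-up values for illustration)
data = [1.2, 0.8, 1.1, 0.9, 1.0, 1.3, 0.7, 1.05]
sigma = 0.2

def loglik(mu):
    """Log-likelihood of the data under a normal model with mean mu."""
    n = len(data)
    return (-n * math.log(sigma * math.sqrt(2 * math.pi))
            - sum((x - mu) ** 2 for x in data) / (2 * sigma ** 2))

mu_hat = sum(data) / len(data)  # the MLE of the mean is the sample mean

# Curvature (second derivative) of the log-likelihood at the MLE,
# approximated by a central finite difference
h = 1e-4
hessian = (loglik(mu_hat + h) - 2 * loglik(mu_hat) + loglik(mu_hat - h)) / h ** 2

# Covariance = inverse of the negative Hessian; SE = sqrt of the diagonal
se = math.sqrt(1.0 / -hessian)
print(f"SE via Hessian:        {se:.4f}")
print(f"Analytic sigma/sqrt(n): {sigma / math.sqrt(len(data)):.4f}")
```

Because this toy log-likelihood is quadratic in $\mu$, the finite-difference curvature matches the analytic value almost exactly; for a multi-parameter model, the same recipe applies with the full Hessian matrix and a matrix inverse.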
Subject versus group level¶
When the model fit includes group-level data (with random or fixed effects), a third level in the result object becomes relevant, which separates subject-level and group-level information. For this purpose, we load a built-in dataset with three participants:
ds2 = remeta.load_dataset('group')

----------------------------------
..Generative model:
Type 1 noise distribution: normal
Type 2 noise type: report
Type 2 noise distribution: beta_mode
..Generative parameters:
type1_noise: 0.5
type1_bias: -0.1
type2_noise: 0.3
type2_criteria: [0.25 0.5 0.75]
[extra] Criterion bias: 0.0000
[extra] Criterion-based confidence bias: 0.0000
..Descriptive statistics:
No. subjects: 3
No. samples: 1000
Accuracy: 85.6% correct
d': 2.2
Choice bias: -4.1%
Confidence: 0.53
M-Ratio: 0.28
AUROC2: 0.59
----------------------------------
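As a side note on the descriptive statistics above: d' is conventionally computed as the difference of z-transformed hit and false-alarm rates. A minimal sketch with made-up rates (not the rates underlying this dataset):

```python
from statistics import NormalDist

# Standard d' computation: z(hit rate) - z(false-alarm rate).
# The rates below are illustrative only, not taken from this dataset.
z = NormalDist().inv_cdf  # inverse standard-normal CDF
hit_rate, fa_rate = 0.88, 0.13
d_prime = z(hit_rate) - z(fa_rate)
print(f"d': {d_prime:.2f}")
```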
For this dataset, we fit the type1_bias as a random effect:
cfg = remeta.Configuration()
cfg.param_type1_bias.group = 'random'
cfg.optim_type2_gridsearch = False

rem = remeta.ReMeta(cfg)
rem.fit(ds2.stimuli, ds2.choices, ds2.confidence)
result2 = rem.summary()

Group-level parameters are first fitted at the individual level to provide suitable initial values for the group-level estimate. The result object always contains information for both levels, accessible as follows.
result2.subject.params

[{'type1_noise': np.float64(0.46208938432786706),
  'type1_bias': np.float64(-0.09931352775816989),
  'type2_noise': np.float64(0.31585988970367485),
  'type2_criteria': array([0.26465114, 0.50571586, 0.76037933])},
 {'type1_noise': np.float64(0.4739633340174307),
  'type1_bias': np.float64(-0.08224188059838966),
  'type2_noise': np.float64(0.27279449366817915),
  'type2_criteria': array([0.26281465, 0.48883183, 0.75365125])},
 {'type1_noise': np.float64(0.5094581873264677),
  'type1_bias': np.float64(-0.13417843272424754),
  'type2_noise': np.float64(0.30140098040048996),
  'type2_criteria': array([0.22286925, 0.50141797, 0.76257958])}]

result2.group.params

[{'type1_noise': np.float64(0.4620991504236927),
  'type1_bias': np.float64(-0.10397343552490379)},
 {'type1_noise': np.float64(0.47447815250071174),
  'type1_bias': np.float64(-0.10188457437583853)},
 {'type1_noise': np.float64(0.5098555076382547),
  'type1_bias': np.float64(-0.10805906886853289)}]

In this case, the type 2 stage was not fitted at the group level, and thus the group-level parameters only include type 1 parameters.
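The list-of-dicts structure shown above is convenient for quick summaries across subjects. A minimal sketch (plain Python, not a ReMeta API call, with rounded values copied from the output above):

```python
# Summarize per-subject estimates from a list of parameter dicts.
# Values are rounded copies of the subject-level output above.
subject_params = [
    {'type1_noise': 0.4621, 'type1_bias': -0.0993},
    {'type1_noise': 0.4740, 'type1_bias': -0.0822},
    {'type1_noise': 0.5095, 'type1_bias': -0.1342},
]
mean_bias = sum(p['type1_bias'] for p in subject_params) / len(subject_params)
print(f"Mean subject-level type1_bias: {mean_bias:.4f}")
```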
The same information can also be accessed per stage:

result2.type1.subject.params

[{'type1_noise': np.float64(0.46208938432786706),
  'type1_bias': np.float64(-0.09931352775816989)},
 {'type1_noise': np.float64(0.4739633340174307),
  'type1_bias': np.float64(-0.08224188059838966)},
 {'type1_noise': np.float64(0.5094581873264677),
  'type1_bias': np.float64(-0.13417843272424754)}]

result2.type1.group.params

[{'type1_noise': np.float64(0.4620991504236927),
  'type1_bias': np.float64(-0.10397343552490379)},
 {'type1_noise': np.float64(0.47447815250071174),
  'type1_bias': np.float64(-0.10188457437583853)},
 {'type1_noise': np.float64(0.5098555076382547),
  'type1_bias': np.float64(-0.10805906886853289)}]

result2.type2.subject.params

[{'type2_noise': np.float64(0.31585988970367485),
  'type2_criteria': array([0.26465114, 0.50571586, 0.76037933])},
 {'type2_noise': np.float64(0.27279449366817915),
  'type2_criteria': array([0.26281465, 0.48883183, 0.75365125])},
 {'type2_noise': np.float64(0.30140098040048996),
  'type2_criteria': array([0.22286925, 0.50141797, 0.76257958])}]

Since the type 2 stage did not include group-level parameters, result2.type2.group is empty (i.e., None):

result2.type2.group is None

True

Model evidence¶
ReMeta is fundamentally based on a frequentist framework and uses maximum likelihood estimation to estimate parameters.
Likelihood at the type 1 stage is the probability of the decisions $y$ given the stimuli $x$ and the type 1 parameters $\theta_1$. Maximum likelihood estimation aims to find the set of parameters $\hat{\theta}_1$ that maximizes the likelihood under the sampling distribution:

$$\hat{\theta}_1 = \operatorname{arg\,max}_{\theta_1} \, p(y \mid x, \theta_1)$$

Likelihood at the type 2 stage is the probability of the reported confidence ratings $c$ given the type 1 decision values and the type 2 parameters $\theta_2$:

$$\hat{\theta}_2 = \operatorname{arg\,max}_{\theta_2} \, p(c \mid y, x, \theta_2)$$
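To make the type 1 likelihood concrete, here is a hedged sketch assuming normal type 1 noise (a probit-style choice model). It is illustrative only, not ReMeta's internal implementation, and the stimuli, choices, and parameter values are made up:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def type1_loglik(stimuli, choices, noise, bias):
    """Log-likelihood of binary choices (+1/-1) given signed stimulus
    strengths, under a probit-style model with additive bias."""
    ll = 0.0
    for x, y in zip(stimuli, choices):
        p_pos = normal_cdf((x + bias) / noise)  # P(choice = +1 | stimulus)
        p = p_pos if y == 1 else 1.0 - p_pos
        ll += math.log(p)
    return ll

stimuli = [0.4, -0.3, 0.8, -0.6, 0.1]   # made-up signed stimulus strengths
choices = [1, -1, 1, -1, 1]             # made-up decisions
print(f"{type1_loglik(stimuli, choices, noise=0.5, bias=-0.1):.3f}")
```

A well-matched parameter set yields a higher (less negative) log-likelihood than a grossly mismatched one, which is exactly what the maximization above exploits.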
Likelihoods are computed separately for the type 1 and type 2 stages and are represented as such in the result object:
print(f'Type 1 log likelihood (overall): {result.type1.loglik:.3f}')
print(f'Type 1 log likelihood (per sample): {result.type1.loglik_per_sample:.3f}')
print(f'Type 2 log likelihood (overall): {result.type2.loglik:.3f}')
print(f'Type 2 log likelihood (per sample): {result.type2.loglik_per_sample:.3f}')
Type 1 log likelihood (overall): -717.841
Type 1 log likelihood (per sample): -0.359
Type 2 log likelihood (overall): -3425.602
Type 2 log likelihood (per sample): -1.713
Higher likelihoods indicate better fits. When comparing fits across datasets of different size (e.g. across studies), it is best to assess the likelihood per sample.
In addition, ReMeta reports the Akaike information criterion (AIC) and the Bayesian information criterion (BIC):

$$\mathrm{AIC} = 2k - 2\ln\hat{\mathcal{L}}$$

where $k$ is the number of parameters and $\hat{\mathcal{L}}$ is the likelihood for the estimated parameters $\hat{\theta}$.

$$\mathrm{BIC} = k\ln n - 2\ln\hat{\mathcal{L}}$$

where $n$ is the number of samples.
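As a quick sanity check, the formulas can be applied to the type 1 log-likelihood reported earlier. Here $k = 2$ (type1_noise and type1_bias); $n = 2000$ is inferred from the ratio of the overall and per-sample log likelihoods and is an assumption:

```python
import math

# Recompute AIC and BIC from the reported type 1 numbers.
loglik = -717.841   # type 1 log likelihood (overall) from above
k, n = 2, 2000      # k: number of type 1 parameters; n: inferred sample count

aic = 2 * k - 2 * loglik
bic = k * math.log(n) - 2 * loglik
print(f"AIC: {aic:.2f}")  # compare with result.type1.aic
print(f"BIC: {bic:.2f}")  # compare with result.type1.bic
```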
print(f'Type 1 AIC (overall): {result.type1.aic:.2f}')
print(f'Type 1 AIC (per sample): {result.type1.aic_per_sample:.4f}')
print(f'Type 2 AIC (overall): {result.type2.aic:.2f}')
print(f'Type 2 AIC (per sample): {result.type2.aic_per_sample:.4f}\n')
print(f'Type 1 BIC (overall): {result.type1.bic:.2f}')
print(f'Type 1 BIC (per sample): {result.type1.bic_per_sample:.4f}')
print(f'Type 2 BIC (overall): {result.type2.bic:.2f}')
print(f'Type 2 BIC (per sample): {result.type2.bic_per_sample:.4f}')

Type 1 AIC (overall): 1439.68
Type 1 AIC (per sample): 0.7198
Type 2 AIC (overall): 6855.20
Type 2 AIC (per sample): 3.4276
Type 1 BIC (overall): 1450.88
Type 1 BIC (per sample): 0.7254
Type 2 BIC (overall): 6866.41
Type 2 BIC (per sample): 3.4332
Here, lower AIC and BIC values indicate better fits.
If a group-level model was fitted at a stage, the result object contains the model evidence of both the subject-level fit and the group-level fit for each subject:
for s in range(result2.nsubjects):
print(f'[subject {s}] Subject-level log likelihood: {result2.type1.subject.loglik[s]:.2f}')

[subject 0] Subject-level log likelihood: -326.13
[subject 1] Subject-level log likelihood: -333.80
[subject 2] Subject-level log likelihood: -361.38
for s in range(result2.nsubjects):
print(f'[subject {s}] Group-level log likelihood: {result2.type1.group.loglik[s]:.2f}')

[subject 0] Group-level log likelihood: -326.14
[subject 1] Group-level log likelihood: -334.08
[subject 2] Group-level log likelihood: -361.86
Note that the group-level log likelihood is expected to be somewhat lower than the subject-level log likelihood, since group-level estimates are constrained by the data of all subjects. This is precisely the goal of group-level fits: to avoid overfitting to individual subjects.