Talk:Akaike information criterion

Fitting statistical models

When analyzing data, a general question of both absolute and relative goodness of fit of a given model arises. In the general case, we are fitting a model with K parameters to the observed data x_1, x_2, ..., x_N. In the case of fitting AR, MA, ARMA, or ARIMA models, the question of interest is what K should be, i.e. how many parameters to include in the model.

The parameters are routinely estimated by minimizing the residual sum of squares or by maximizing the log-likelihood of the data. For normally distributed errors, the least-squares method and the maximum-likelihood method yield identical results.
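
A one-line sketch of why that equivalence holds, assuming i.i.d. Gaussian errors with variance \sigma^2 (the notation \hat{x}_i for the fitted values is mine):

\log L = -\frac{N}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{N}\left(x_i - \hat{x}_i\right)^2,

so for any fixed \sigma^2 the log-likelihood is maximized in the remaining parameters exactly when the residual sum of squares is minimized.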

These techniques, however, cannot be used to estimate the optimal K. For that, we use information criteria, which also justify the use of the log-likelihood above.
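
To make the order-selection step concrete, here is a minimal sketch in Python; the simulated AR(2) data, the helper name fit_ar_aic, and the RSS-based form of AIC (valid for Gaussian errors, up to an additive constant that is the same for every k) are my own illustrative choices, not anything from the article:

import numpy as np

# Simulated data from an AR(2) process, purely for illustration.
rng = np.random.default_rng(0)
n = 500
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

def fit_ar_aic(x, k):
    # Fit an AR(k) model by least squares. Under Gaussian errors,
    # maximizing the log-likelihood is equivalent to minimizing the
    # residual sum of squares, so (up to an additive constant)
    # AIC = n*log(RSS/n) + 2*(k + 1), counting the error variance
    # as one of the estimated parameters.
    y = x[k:]
    X = np.column_stack([x[k - j:len(x) - j] for j in range(1, k + 1)])
    coef, resid, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(resid[0]) if resid.size else float(np.sum((y - X @ coef) ** 2))
    n_eff = len(y)
    return n_eff * np.log(rss / n_eff) + 2 * (k + 1)

# Compare candidate orders; the smallest AIC points to the K to use.
for k in range(1, 6):
    print(k, round(fit_ar_aic(x, k), 1))

With these made-up data the minimum should land at or near k = 2, but the point is only the shape of the procedure: fit each candidate order, compute its AIC, and pick the smallest.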

More TBA...

More TBA... Note: The entropy link should be changed to Information entropy

Error in Reference

AIC was introduced by Akaike in 1971/1972 in "Information theory and an extension of the maximum likelihood principle", not in 1974. Please correct it. — Preceding unsigned comment added by 31.182.64.248 (talk · contribs) 01:05, 23 November 2014

For greater understanding

If I understand correctly, the sentence "As an example, suppose that there are three candidate models, whose AIC values are 100, 102, and 110. Then the second model is exp((100 − 102)/2) = 0.368 times as probable as the first model to minimize the information loss."

means that model 2 is exp((102 − 100)/2) = 2.72 times less probable than model 1. For me, it is much easier to grasp "2.72 times less" of something than "0.37 times more" of something. Pawel.jamiolkowski (talk) 15:11, 5 December 2024 (UTC)
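
For what it's worth, a tiny sketch of the same arithmetic (the AIC values 100 and 102 are the ones from the article's example; the two readings are simply reciprocals of each other):

import math

aic_1, aic_2 = 100, 102

# Relative likelihood of model 2 with respect to model 1, as in the article.
rel = math.exp((aic_1 - aic_2) / 2)

print(rel)      # 0.3678... ("0.368 times as probable")
print(1 / rel)  # 2.7182... (the "2.72 times less probable" reading above)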