Bayes factor
The Bayes factor has been used in statistics as an alternative to classical statistical hypothesis testing.[1][2] Bayesian model comparison is a method of model selection based on Bayes factors.
Definition
The Bayes factor is the ratio of the marginal likelihoods of two competing hypotheses, usually a null hypothesis and an alternative.[3]
The posterior probability of a model M given data D is given by Bayes' theorem:

$$ \Pr(M \mid D) = \frac{\Pr(D \mid M)\,\Pr(M)}{\Pr(D)} $$
The key data-dependent term Pr(D | M) represents the probability that the data are produced under the assumption of the model M; evaluating it correctly is the key to Bayesian model comparison.
Given a model selection problem in which we have to choose between two models on the basis of observed data D, the plausibility of the two different models M1 and M2, parametrised by model parameter vectors θ₁ and θ₂, is assessed by the Bayes factor K given by

$$ K = \frac{\Pr(D \mid M_1)}{\Pr(D \mid M_2)} = \frac{\int \Pr(\theta_1 \mid M_1)\,\Pr(D \mid \theta_1, M_1)\,d\theta_1}{\int \Pr(\theta_2 \mid M_2)\,\Pr(D \mid \theta_2, M_2)\,d\theta_2} $$
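For illustration only (not part of the cited sources), the following Python sketch approximates each marginal likelihood by a midpoint Riemann sum over the parameter and forms their ratio. The Gaussian data model, the made-up observations and the uniform prior range are assumptions chosen purely to show the mechanics.

```python
import math

def gauss_likelihood(data, mu, sigma=1.0):
    # Likelihood Pr(D | mu) of i.i.d. normal observations with known sigma.
    return math.prod(
        math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        for x in data
    )

def marginal_likelihood(data, lo, hi, grid=2000):
    # Pr(D | M) = integral of Pr(D | mu, M) * Pr(mu | M) d(mu), here with a
    # uniform prior on [lo, hi], approximated by a midpoint Riemann sum.
    step = (hi - lo) / grid
    prior_density = 1.0 / (hi - lo)
    return sum(
        gauss_likelihood(data, lo + (i + 0.5) * step) * prior_density
        for i in range(grid)
    ) * step

data = [0.4, -0.2, 1.1, 0.3, -0.5]              # made-up observations (illustrative only)
pr_d_m1 = gauss_likelihood(data, 0.0)           # M1: mean fixed at 0, no free parameter
pr_d_m2 = marginal_likelihood(data, -5.0, 5.0)  # M2: mean unknown, uniform prior on [-5, 5]
print("Bayes factor K =", pr_d_m1 / pr_d_m2)
```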
Interpretation
A value of K > 1 means that M1 is more strongly supported by the data under consideration than M2. Note that classical hypothesis testing gives one hypothesis (or model) preferred status (the 'null hypothesis'), and only considers evidence against it. Harold Jeffreys gave a scale for interpretation of K:[4]
{| class="wikitable" style="text-align: center; margin-left: auto; margin-right: auto;"
! K !! dHart !! bits !! Strength of evidence
|-
| < 10^0 || < 0 || < 0 || Negative (supports M2)
|-
| 10^0 to 10^(1/2) || 0 to 5 || 0 to 1.6 || Barely worth mentioning
|-
| 10^(1/2) to 10^1 || 5 to 10 || 1.6 to 3.3 || Substantial
|-
| 10^1 to 10^(3/2) || 10 to 15 || 3.3 to 5.0 || Strong
|-
| 10^(3/2) to 10^2 || 15 to 20 || 5.0 to 6.6 || Very strong
|-
| > 10^2 || > 20 || > 6.6 || Decisive
|}
The second column gives the corresponding weights of evidence in decihartleys (also known as decibans); bits are added in the third column for clarity. According to I. J. Good a change in a weight of evidence of 1 deciban or 1/3 of a bit (i.e. a change in an odds ratio from evens to about 5:4) is about as finely as humans can reasonably perceive their degree of belief in a hypothesis in everyday use.[5]
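The unit conversions above amount to taking logarithms of K in different bases; a minimal Python illustration (an addition, not from the cited sources):

```python
import math

def evidence_units(K):
    # Weight of evidence implied by a Bayes factor K:
    # decihartleys (decibans) are 10*log10(K); bits are log2(K).
    return {"dHart": 10 * math.log10(K), "bits": math.log2(K)}

print(evidence_units(3.2))   # about 5 dHart and 1.7 bits
```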
An alternative table, widely cited, is provided by Kass and Raftery (1995):[6]
{| class="wikitable" style="text-align: center; margin-left: auto; margin-right: auto;"
! log10 K !! K !! Strength of evidence
|-
| 0 to 1/2 || 1 to 3.2 || Not worth more than a bare mention
|-
| 1/2 to 1 || 3.2 to 10 || Substantial
|-
| 1 to 2 || 10 to 100 || Strong
|-
| > 2 || > 100 || Decisive
|}
Example
Suppose we have a random variable that produces either a success or a failure. We want to compare a model M1 where the probability of success is q = ½, and another model M2 where q is unknown and we take a prior distribution for q that is uniform on [0,1]. We take a sample of 200, and find 115 successes and 85 failures. The likelihood can be calculated according to the binomial distribution:

$$ \Pr(X = 115 \mid q) = \binom{200}{115} q^{115} (1-q)^{85} $$
Thus we have for M1

$$ \Pr(X = 115 \mid M_1) = \binom{200}{115} \left(\tfrac{1}{2}\right)^{200} \approx 0.005956, $$
whereas for M2 we have

$$ \Pr(X = 115 \mid M_2) = \int_0^1 \binom{200}{115} q^{115} (1-q)^{85} \, dq = \frac{1}{201} \approx 0.004975. $$
The ratio is then 0.005956/0.004975 ≈ 1.2, which is "barely worth mentioning" even though it points very slightly towards M1.
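A short Python check of these numbers (an illustrative addition; it uses the Beta-integral identity ∫₀¹ q^k (1−q)^(n−k) dq = k!(n−k)!/(n+1)!, which reduces the M2 marginal to 1/(n+1)):

```python
import math

n, k = 200, 115

# M1: q fixed at 1/2
pr_d_m1 = math.comb(n, k) * 0.5 ** n    # ≈ 0.005956

# M2: q uniform on [0, 1]; the Beta integral reduces the marginal to 1/(n + 1)
pr_d_m2 = 1 / (n + 1)                   # = 1/201 ≈ 0.004975

print("K =", pr_d_m1 / pr_d_m2)         # ≈ 1.2
```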
A frequentist hypothesis test of M1 (here considered as a null hypothesis) would have produced a very different result. Such a test says that M1 should be rejected at the 5% significance level, since the probability of getting 115 or more successes from a sample of 200 if q = ½ is 0.02, and the two-tailed probability of a result as extreme as or more extreme than 115 is 0.04. Note that 115 is more than two standard deviations away from 100. Thus, whereas a frequentist hypothesis test would yield significant results at the 5% significance level, the Bayes factor hardly considers this to be an extreme result. Note, however, that a non-uniform prior (for example one that reflects the fact that you expect the numbers of successes and failures to be of the same order of magnitude) could result in a Bayes factor that is more in agreement with the frequentist hypothesis test.
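The tail probabilities quoted here can be recomputed directly from the binomial distribution; a minimal Python sketch (an illustrative addition):

```python
import math

n, q = 200, 0.5
# One-tailed: Pr(X >= 115) under q = 1/2 (each term is C(n, x) * 0.5**n).
p_one_tailed = sum(math.comb(n, x) * q ** n for x in range(115, n + 1))
p_two_tailed = 2 * p_one_tailed    # by symmetry of the binomial(200, 1/2) around 100
print(p_one_tailed, p_two_tailed)  # ≈ 0.020 and ≈ 0.040
```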
A classical likelihood-ratio test would have found the maximum likelihood estimate for q, namely 115⁄200 = 0.575, whence

$$ \Pr(X = 115 \mid q = 0.575) = \binom{200}{115} (0.575)^{115} (0.425)^{85} \approx 0.056991 $$
(rather than averaging over all possible q). That gives a likelihood ratio of 0.005956/0.056991 ≈ 0.1 and points towards M2.
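The maximised likelihood under M2 and the resulting classical likelihood ratio can likewise be checked (an illustrative addition):

```python
import math

n, k = 200, 115
q_hat = k / n                                                        # MLE: 115/200 = 0.575
lik_m2_max = math.comb(n, k) * q_hat ** k * (1 - q_hat) ** (n - k)   # ≈ 0.056991
lik_m1 = math.comb(n, k) * 0.5 ** n                                  # ≈ 0.005956
print(lik_m1 / lik_m2_max)                                           # ≈ 0.1, favouring M2
```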
M2 is a more complex model than M1 because it has a free parameter which allows it to model the data more closely. The ability of Bayes factors to take this into account is a reason why Bayesian inference has been put forward as a theoretical justification for and generalisation of Occam's razor, reducing Type I errors.[7]
On the other hand, the modern method of relative likelihood takes into account the number of free parameters in the models, unlike the classical likelihood ratio. The relative likelihood method could be applied as follows. Model M1 has 0 parameters, and so its AIC value is 2·0 − 2·ln(0.005956) = 10.2467. Model M2 has 1 parameter, and so its AIC value is 2·1 − 2·ln(0.056991) = 7.7297. Hence M1 is about exp((7.7297 − 10.2467)/2) = 0.284 times as probable as M2 to minimize the information loss. Thus M2 is slightly preferred, but M1 cannot be excluded.
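The AIC values and the relative likelihood in this paragraph can be reproduced as follows (an illustrative addition):

```python
import math

aic_m1 = 2 * 0 - 2 * math.log(0.005956)    # 0 free parameters: ≈ 10.2467
aic_m2 = 2 * 1 - 2 * math.log(0.056991)    # 1 free parameter:  ≈ 7.7297
relative_likelihood = math.exp((aic_m2 - aic_m1) / 2)   # M1 relative to M2
print(aic_m1, aic_m2, relative_likelihood)               # ≈ 0.284
```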
See also
- Akaike information criterion
- Approximate Bayesian computation
- Bayesian information criterion
- Deviance information criterion
- Lindley's paradox
- Minimum message length
- Model selection
- Statistical ratios
References
- ^ Template:Cite journal
- ^ Template:Cite journal
- ^ Good, Phillip; Hardin, James (July 23, 2012). Common errors in statistics (and how to avoid them) (4th ed.). Hoboken, New Jersey: John Wiley & Sons, Inc. pp. 129–131. ISBN 978-1118294390.
- ^ Jeffreys, Harold (1998) [1961]. The Theory of Probability (3rd ed.). Oxford, England. p. 432. ISBN 9780191589676.
- ^ Good, I. J. (1979). "Studies in the History of Probability and Statistics. XXXVII A. M. Turing's statistical work in World War II". Biometrika. 66 (2): 393–396. doi:10.1093/biomet/66.2.393. MR 0548210.
- ^ Kass, Robert E.; Raftery, Adrian E. (1995). "Bayes Factors". Journal of the American Statistical Association. 90 (430): 773–795.
- ^ Sharpening Ockham's Razor On a Bayesian Strop
External links