Bayes Factors

Appealing idea for comparing models:

1. Assign the models prior probabilities.
2. Use Bayes' rule to compute posterior probabilities.
3. Choose the model(s) with the highest posterior probability.

Can we compare models in a way that isn't too dependent on the prior probabilities? We want a pure measure of the strength of evidence from the data.

Two Models

For data y, consider two different models, H1 and H2, and assign them prior probabilities p(H1) > 0 and p(H2) > 0. Bayes' rule formally gives the posterior probabilities

$$p(H_1 \mid y) = \frac{p(H_1)\, p(y \mid H_1)}{c}, \qquad p(H_2 \mid y) = \frac{p(H_2)\, p(y \mid H_2)}{c},$$

where the normalizing factor c is the same in both cases (it depends only on y).

It follows that

$$\frac{p(H_2 \mid y)}{p(H_1 \mid y)} = \frac{p(H_2)}{p(H_1)} \times \frac{p(y \mid H_2)}{p(y \mid H_1)}.$$

The final factor is the Bayes factor in favor of H2 versus H1:

$$\mathrm{BF}(H_2; H_1) = \frac{p(y \mid H_2)}{p(y \mid H_1)}.$$

Also, p(H2 | y) / p(H1 | y) is the posterior odds, and p(H2) / p(H1) is the prior odds, in favor of H2. So

$$\mathrm{BF}(H_2; H_1) = \frac{\text{posterior odds favoring } H_2}{\text{prior odds favoring } H_2}.$$

The Bayes factor represents how much the odds of H2 relative to H1 change after seeing the data y:

- BF(H2; H1) ≈ 1 indicates the data do not distinguish well between the models.
- BF(H2; H1) ≫ 1 indicates strong support for H2 over H1.

The Bayes factor thus represents the strength of evidence coming from the data. A possible interpretive scale (Kass & Raftery, 1995):

BF(H2; H1)    Data evidence for H2 vs. H1
1 to 3        Barely mentionable
3 to 20       Positive
20 to 150     Strong
> 150         Very strong

Simple Case: No Parameters

Suppose H1 and H2 each fully specify a distribution for y, without needing any parameters (or priors). Then p(y | Hm) is the likelihood (m = 1, 2), and

$$\mathrm{BF}(H_2; H_1) = \frac{p(y \mid H_2)}{p(y \mid H_1)} = \text{likelihood ratio}.$$

This is a classical (non-Bayesian) measure of evidence in favor of H2 over H1.

Example

Let H2 = you have a rare disease, H1 = you don't, and let y = 1 if you test positive, 0 if not. Supposing

$$p(y \mid H_2) = \begin{cases} 0.99, & y = 1 \\ 0.01, & y = 0 \end{cases} \qquad p(y \mid H_1) = \begin{cases} 0.05, & y = 1 \\ 0.95, & y = 0 \end{cases}$$

then, if you test positive (y = 1),

$$\mathrm{BF}(H_2; H_1) = \frac{0.99}{0.05} \approx 20,$$

representing (almost) strong evidence from the data in favor of disease.

Even without reference to prior probabilities (e.g., background rates), a positive test greatly increases your odds of having the disease. Nonetheless, if your prior probability of having the disease is sufficiently small, your posterior probability will also be small. (A numerical sketch of this example appears at the end of this section.)

General Case

If model Hm has sampling (data) distributions specified according to a parameter θm, with densities p(y | θm, Hm), then

$$p(y \mid H_m) = \text{marginal data density under model } H_m = \int p(y \mid \theta_m, H_m)\, p(\theta_m \mid H_m)\, d\theta_m,$$

where p(θm | Hm) is the prior for model Hm. Thus the Bayes factor depends on the priors chosen for H1 and for H2 (the beta-binomial sketch at the end of this section illustrates this dependence).

Note: both priors must be proper; otherwise the marginal densities will depend on arbitrary scaling factors, and the Bayes factor will be undefined.

Application

Consider a model with parameter θ and two disjoint sub-models,

H1: θ ∈ Θ1 and H2: θ ∈ Θ2,

each of positive prior probability. Then the Bayes factor favoring H2 is

$$\mathrm{BF}(H_2; H_1) = \frac{\Pr(\theta \in \Theta_2 \mid y) \,/\, \Pr(\theta \in \Theta_1 \mid y)}{\Pr(\theta \in \Theta_2) \,/\, \Pr(\theta \in \Theta_1)},$$

which can be used instead of a classical hypothesis test (see the final sketch below).

Drawbacks

In the general case, Bayes factors

- can be sensitive to aspects of the models that shouldn't matter;
- can give paradoxical results, especially when the parameter spaces have different dimensions (BDA3, Sec. 7.4).
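The following is a minimal numerical sketch of the disease-testing example and of the identity posterior odds = BF × prior odds. The likelihoods (0.99 and 0.05) come from the example above; the 1-in-1000 background rate is an illustrative assumption, not from the source.

```python
def bayes_factor(lik_h2: float, lik_h1: float) -> float:
    """Bayes factor BF(H2; H1) = p(y | H2) / p(y | H1)."""
    return lik_h2 / lik_h1

p_pos_disease = 0.99   # p(y = 1 | H2), from the example
p_pos_healthy = 0.05   # p(y = 1 | H1), from the example

bf = bayes_factor(p_pos_disease, p_pos_healthy)
print(f"BF(H2; H1) = {bf:.1f}")   # 19.8 -- (almost) strong evidence

# Posterior odds = BF * prior odds.  The 1-in-1000 background rate is an
# assumed number for illustration; with a prior this small, the posterior
# probability of disease stays small despite BF ~ 20.
prior_disease = 0.001
prior_odds = prior_disease / (1 - prior_disease)
posterior_odds = bf * prior_odds
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"P(disease | positive test) = {posterior_prob:.3f}")   # ~0.019
```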
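For the general case, here is a sketch (an assumed example, not from the source) of a Bayes factor computed from marginal likelihoods. For y ~ Binomial(n, θ) with a Beta(a, b) prior, the integral p(y | H) = ∫ p(y | θ, H) p(θ | H) dθ has a closed beta-binomial form, used below; the particular data and prior parameters are made up.

```python
from math import exp, lgamma

def log_beta(a: float, b: float) -> float:
    """log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_marginal(y: int, n: int, a: float, b: float) -> float:
    """log p(y | H) for y ~ Binomial(n, theta), theta ~ Beta(a, b)."""
    log_binom = lgamma(n + 1) - lgamma(y + 1) - lgamma(n - y + 1)
    return log_binom + log_beta(a + y, b + n - y) - log_beta(a, b)

y, n = 7, 10   # 7 successes in 10 trials (illustrative data)

# H1: prior concentrated near theta = 0.5;  H2: concentrated near 0.8.
log_m1 = log_marginal(y, n, a=20, b=20)
log_m2 = log_marginal(y, n, a=16, b=4)
print(f"BF(H2; H1) = {exp(log_m2 - log_m1):.2f}")

# The Bayes factor depends on the priors, exactly as the text warns:
# replacing H2's prior with a flat (but proper) Beta(1, 1) changes it.
log_m2_flat = log_marginal(y, n, a=1, b=1)
print(f"BF with flat H2 prior = {exp(log_m2_flat - log_m1):.2f}")
```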
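Finally, a sketch of the "Application" formula: for disjoint sub-models H1: θ ≤ 0 and H2: θ > 0 within one model, the Bayes factor is the ratio of posterior odds to prior odds of the two regions. The conjugate Normal setup and the observed value are assumptions chosen for illustration.

```python
from statistics import NormalDist

# Assumed setup: prior theta ~ N(0, 1), one observation y | theta ~ N(theta, 1).
prior = NormalDist(mu=0.0, sigma=1.0)
y = 1.5

# Standard Normal-Normal update with both precisions equal to 1:
# posterior mean y/2, posterior variance 1/2.
posterior = NormalDist(mu=y / 2, sigma=0.5 ** 0.5)

# Prior and posterior probabilities of each region.
prior_h2 = 1 - prior.cdf(0.0)        # Pr(theta > 0) = 0.5
prior_h1 = prior.cdf(0.0)            # Pr(theta <= 0) = 0.5
post_h2 = 1 - posterior.cdf(0.0)
post_h1 = posterior.cdf(0.0)

bf = (post_h2 / post_h1) / (prior_h2 / prior_h1)
print(f"BF(H2; H1) = {bf:.2f}")
```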
