More on Bayesian Split Testing

Hi All

With this post I am expanding on the previous post on split testing. In particular, the last post assumed that the test groups had distinct and unknown means and variances. While that case is interesting, it might not be the most useful one. The problem is that it tests whether the mean or the variance differs between the groups; life is too short to care about differences in variance.

Here I give the evidence ratios for three types of tests: first, where the variance of the two groups is known; second, where the variances of the two groups are unknown but assumed to be the same; and third, where the variances are unknown and allowed to be different (the previously presented case).

Before jumping into that, it is useful (though maybe not interesting? Look, you chose to read this) to review the paradigm of Bayesian evidence as used in split testing.

Bayesian evidence is the probability of the observations, marginalized over all parameters. To justify that claim, consider Bayes' formula as commonly used in statistical inference:

P({\bf m}_{j} | {\bf d}) = \frac{\pi({\bf m}_{j}) \mathcal{L}({\bf m}_{j})}{\mathcal{Z}_{j}}

Here \pi({\bf m}_{j}) is the prior distribution of {\bf m}_{j}, \mathcal{L}({\bf m}_{j}) is the likelihood
of the parameter vector (i.e., the probability of the data given the model), and \mathcal{Z}_{j} is the Bayesian evidence. The subscript j denotes the choice of parameterization. The posterior distribution P({\bf m}_{j} | {\bf d}) is a distribution! That might seem obvious; however, it has a significant consequence: distributions must sum / integrate to 1. Thus \int d{\bf m}_{j}  P({\bf m}_{j} | {\bf d}) = 1, which in turn means that \mathcal{Z}_{j} = \int d{\bf m}_{j}  \pi({\bf m}_{j}) \mathcal{L}({\bf m}_{j}).

Returning to Bayes theorem:

P(A | B) = \frac{P(B | A) P(A)}{P(B)}

If we relabel A as the model {\bf m}_{j} and B as the data {\bf d}, then

P({\bf m}_{j} | {\bf d}) = \frac{P({\bf d} | {\bf m}_{j}) P({\bf m}_{j})}{P({\bf d})}

i.e., P({\bf d}) = \mathcal{Z}_{j} = \int d{\bf m}_{j}  \pi({\bf m}_{j}) \mathcal{L}({\bf m}_{j}). The probability of the data is the integral, over all parameters, of the prior times the likelihood.
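To make that statement concrete, here is a minimal numerical sketch in Python. The data, the known \sigma = 1, and the uniform prior on [-5, 5] are illustrative choices of mine, not anything from the original posts; the point is only that the evidence is literally this integral.

import numpy as np
from scipy import stats
from scipy.integrate import quad

# Toy model: Gaussian data with known sigma = 1 and one unknown mean mu.
# Assumed prior: uniform on [-5, 5], so pi(mu) = 1/10 on that interval.
d = np.array([0.3, -0.1, 0.8, 0.2])  # made-up observations
sigma = 1.0

def prior_times_likelihood(mu):
    # pi(mu) * L(mu): the prior density times the probability of the data
    return (1.0 / 10.0) * np.prod(stats.norm.pdf(d, loc=mu, scale=sigma))

# The evidence Z is the integral of prior * likelihood over the parameter.
Z, _ = quad(prior_times_likelihood, -5.0, 5.0)
print(Z)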

The evidence ratio of two groups of sizes N and M with unknown means and the same known variance to that of one group with a known variance is:

[The original equation image is missing; a reconstruction follows.]
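Since the image did not survive, here is a hedged reconstruction rather than the author's exact expression. Assuming a flat prior of width \Delta_{\mu} on each unknown mean (the prior width \Delta_{\mu} is my assumption), carrying out the Gaussian integrals gives

\frac{\mathcal{Z}_{2}}{\mathcal{Z}_{1}} = \frac{1}{\Delta_{\mu}} \sqrt{\frac{2 \pi \sigma^{2} (N+M)}{N M}} \exp\left( \frac{N M (\bar{x} - \bar{y})^{2}}{2 (N+M) \sigma^{2}} \right)

where \bar{x} and \bar{y} are the two sample means. The exponential rewards separation between the group means, while the prefactor, which shrinks as \Delta_{\mu} grows, is the Occam penalty for the extra parameter.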

The evidence ratio of two groups of sizes N and M with unknown means and the same unknown variance to that of one group with an unknown variance is:

[The original equation image is missing; a reconstruction follows.]
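Again a hedged reconstruction in place of the lost image: with the same assumed flat prior of width \Delta_{\mu} on each mean and a Jeffreys prior \propto 1/\sigma on the shared standard deviation (over a log-range \Delta_{\ln\sigma}, which cancels in this ratio), the \sigma integrals produce Gamma functions and the ratio becomes

\frac{\mathcal{Z}_{2}}{\mathcal{Z}_{1}} = \frac{1}{\Delta_{\mu}} \sqrt{\frac{\pi (N+M)}{N M}} \, \frac{\Gamma\left(\frac{N+M-2}{2}\right)}{\Gamma\left(\frac{N+M-1}{2}\right)} \, \frac{S_{\rm tot}^{(N+M-1)/2}}{\left(S_{x} + S_{y}\right)^{(N+M-2)/2}}

where S_{x} = \sum_{i} (x_{i} - \bar{x})^{2}, S_{y} = \sum_{i} (y_{i} - \bar{y})^{2}, and S_{\rm tot} is the same sum over the pooled data about the pooled mean.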

The evidence ratio of two groups of sizes N and M with unknown means and different unknown variances to that of one group with an unknown variance is:

[The original equation image is missing; a reconstruction follows.]
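One more hedged reconstruction under the same assumed priors (here a factor of \Delta_{\ln\sigma} does not cancel, because the split model carries an extra \sigma):

\frac{\mathcal{Z}_{2}}{\mathcal{Z}_{1}} = \frac{1}{2 \, \Delta_{\mu} \Delta_{\ln\sigma}} \sqrt{\frac{\pi (N+M)}{N M}} \, \frac{\Gamma\left(\frac{N-1}{2}\right) \Gamma\left(\frac{M-1}{2}\right)}{\Gamma\left(\frac{N+M-1}{2}\right)} \, \frac{S_{\rm tot}^{(N+M-1)/2}}{S_{x}^{(N-1)/2} \, S_{y}^{(M-1)/2}}

with S_{x}, S_{y}, and S_{\rm tot} defined as above.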


A few final notes. These equations look ugly, but they are actually very easy to interpret: if the ratio is greater than 1, the data (observations) are more likely to have come from the more complex model; if it is less than 1, they are more likely to have come from the simpler model. None of the straw-man garbage of the t-test. Also, expanding these tests to multiple groups does not fall into the p-value problem of multiple comparisons. Finally, you probably want to compute the numerators and denominators in logs, as the values get really big / small for any reasonably sized data set; a sketch of that follows below.
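Here is a minimal sketch of that log-space computation for the third (different unknown variances) ratio, using the reconstruction above; the prior widths d_mu and d_lnsigma are assumed regularization choices, not values from the original post.

import numpy as np
from scipy.special import gammaln

def log_evidence_ratio(x, y, d_mu=10.0, d_lnsigma=10.0):
    # Log evidence ratio: two groups with separate unknown means and
    # variances vs. one pooled group. Everything stays in log space so
    # the huge Gamma functions and power terms never overflow.
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, m = len(x), len(y)
    pooled = np.concatenate([x, y])
    s_x = np.sum((x - x.mean()) ** 2)
    s_y = np.sum((y - y.mean()) ** 2)
    s_tot = np.sum((pooled - pooled.mean()) ** 2)
    return (-np.log(2 * d_mu * d_lnsigma)
            + 0.5 * np.log(np.pi * (n + m) / (n * m))
            + gammaln((n - 1) / 2) + gammaln((m - 1) / 2)
            - gammaln((n + m - 1) / 2)
            + 0.5 * (n + m - 1) * np.log(s_tot)
            - 0.5 * (n - 1) * np.log(s_x)
            - 0.5 * (m - 1) * np.log(s_y))

rng = np.random.default_rng(0)
# Identical groups: log ratio should be negative (simpler model wins).
print(log_evidence_ratio(rng.normal(0, 1, 500), rng.normal(0, 1, 500)))
# Shifted second group: log ratio should be large and positive.
print(log_evidence_ratio(rng.normal(0, 1, 500), rng.normal(0.5, 1, 500)))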
