Gibbs sampling

In mathematics and physics, Gibbs sampling is an algorithm to generate a sequence of samples from the joint probability distribution of two or more random variables. The purpose of such a sequence is to approximate the joint distribution (i.e. to generate a histogram) or to compute an integral (such as an expected value). Gibbs sampling is a special case of the Metropolis–Hastings algorithm, and thus an example of a Markov chain Monte Carlo algorithm. The algorithm is named after the physicist J. W. Gibbs, in reference to an analogy between the sampling algorithm and statistical physics. It was devised by Geman and Geman (citation below), some eight decades after the death of Gibbs, and is also called the Gibbs sampler.

Gibbs sampling is applicable when the joint distribution is not known explicitly, but the conditional distribution of each variable is known. The algorithm generates an instance from the distribution of each variable in turn, conditional on the current values of the other variables. It can be shown (see, for example, Gelman et al.) that the sequence of samples constitutes a Markov chain, and that the stationary distribution of that Markov chain is just the sought-after joint distribution. Gibbs sampling is particularly well adapted to sampling the posterior distribution of a Bayesian network, since Bayesian networks are typically specified as a collection of conditional distributions. BUGS (link below) is a program for carrying out Gibbs sampling on Bayesian networks.
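The loop described above — resampling each variable in turn from its conditional distribution given the current values of the others — can be sketched in code. The small two-variable joint table and all names below are illustrative assumptions, not part of the article; a tabulated joint simply makes the conditionals easy to compute exactly.

```python
import random

# Illustrative joint distribution over two binary variables, stored as a
# table so that the conditional of each variable given the other is exact.
JOINT = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

def sample_conditional(joint, state, j):
    """Draw a new value for component j from p(x_j | all other components)."""
    weights = []
    for v in (0, 1):
        s = list(state)
        s[j] = v
        weights.append(joint[tuple(s)])
    return random.choices((0, 1), weights=weights)[0]

def gibbs(joint, n_sweeps, start=(0, 0)):
    """Run the Gibbs sampler, recording the state after each full sweep."""
    state = list(start)
    samples = []
    for _ in range(n_sweeps):
        for j in range(len(state)):          # update each variable in turn
            state[j] = sample_conditional(joint, state, j)
        samples.append(tuple(state))
    return samples

random.seed(1)
draws = gibbs(JOINT, 50_000)
# Empirical frequencies approximate the joint distribution (the
# "generate a histogram" use mentioned above).
freq = {x: draws.count(x) / len(draws) for x in JOINT}
```

With enough sweeps, the histogram of draws approximates the joint table, even though each update only ever consults one conditional distribution at a time.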
Background

Gibbs sampling is a special case of Metropolis–Hastings sampling in which the randomly generated candidate value is always accepted (the acceptance probability is 1). The point of Gibbs sampling is that, given a multivariate distribution, it is simpler to sample from a conditional distribution than to integrate over a joint distribution. Suppose we want to obtain samples x = (x_1, …, x_n) from a joint distribution p(x_1, …, x_n). We begin with an initial value x^{(0)} and obtain each sample componentwise: component x_j^{(i)} is drawn from its conditional distribution given the current values of all the other components,

  x_j^{(i)} ~ p(x_j | x_1^{(i)}, …, x_{j-1}^{(i)}, x_{j+1}^{(i-1)}, …, x_n^{(i-1)}).

Once every component has been updated, the sweep is repeated for the next sample x^{(i+1)}.

Implementation

Suppose that a sample X is taken from a distribution depending on a parameter vector θ ∈ Θ of length d, with prior distribution g(θ_1, …, θ_d). It may be that d is very large and that numerical integration to find the marginal densities of the θ_i would be computationally expensive. Then an alternative method of calculating the marginal densities is to create a Markov chain on the space Θ by repeating these two steps:

1. Pick a random index j, 1 ≤ j ≤ d.
2. Pick a new value for θ_j according to its conditional distribution g(θ_j | θ_1, …, θ_{j-1}, θ_{j+1}, …, θ_d).
These steps define a reversible Markov chain with the desired invariant distribution g. This can be proven as follows. Define x ∼_j y if x_i = y_i for all i ≠ j, and let p_{xy} denote the probability of a jump from state x to state y. Then, for x ∼_j y, the transition probabilities are

  p_{xy} = (1/d) · g(y) / Σ_{z : z ∼_j x} g(z),

and p_{xy} = 0 otherwise. So

  g(x) p_{xy} = (1/d) · g(x) g(y) / Σ_{z : z ∼_j x} g(z) = g(y) p_{yx},

since x ∼_j y implies that the sums over z are identical. Thus the detailed balance equations are satisfied, showing that the chain is reversible and that it has invariant distribution g. In practice, the index j is not chosen at random; instead the chain cycles through the indices in order. In general this gives a non-reversible chain, but it will still have the desired invariant distribution (as long as the chain can access all states under the fixed ordering).

Failure Modes

There are two ways that Gibbs sampling can fail. The first is when there are islands of high-probability states with no paths between them. For example, consider a probability distribution over 2-bit vectors in which the vectors (0,0) and (1,1) each have probability 1/2, while the other two vectors (0,1) and (1,0) have probability zero. Gibbs sampling will become trapped in one of the two high-probability vectors and will never reach the other one. More generally, for any distribution over high-dimensional, real-valued vectors, if two particular elements of the vector are perfectly correlated (or perfectly anti-correlated), those two elements will become stuck, and Gibbs sampling will never be able to change them.

The second problem can happen even when all states have nonzero probability and there is only a single island of high-probability states. For example, consider a probability distribution over 100-bit vectors in which the all-zeros vector occurs with probability 1/2 and all other vectors are equally probable, each with probability 1/(2(2^{100} − 1)). If you want to estimate the probability of the zero vector, it would be sufficient to take 100 or 1,000 samples from the true distribution. That would very likely give an answer very close to 1/2.
But you would probably have to take more than 2^{100} samples from Gibbs sampling to get the same result; no computer could do this in a lifetime, and the problem occurs no matter how long the burn-in period is. The reason is that in the true distribution, the zero vector occurs half the time, and those occurrences are randomly mixed in with the nonzero vectors, so even a small sample will see both zero and nonzero vectors. Gibbs sampling, by contrast, will alternate between returning only the zero vector for long periods (about 2^{99} sweeps in a row) and returning only nonzero vectors for long periods (again about 2^{99} in a row). In the long run it therefore produces the correct distribution, giving the zero vector with probability 1/2; but for any sample size obtainable in less than a century, the zero vector will appear in the sample with frequency of nearly 0 or nearly 1. So in this case, Gibbs sampling works in theory but fails in practice.
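The two-island failure mode described above is easy to demonstrate directly. The code below is an illustrative sketch (the probability table and function names are assumptions, not from the article): with p(0,0) = p(1,1) = 1/2 and the off-diagonal states at probability zero, every conditional update is deterministic, so a chain started at (0,0) can never visit (1,1).

```python
import random

# Joint distribution over 2-bit vectors with two high-probability
# "islands" at (0,0) and (1,1) and no nonzero-probability path between
# them (the example from the text).
ISLAND_JOINT = {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.5}

def gibbs_sweep(joint, state):
    """One full sweep: resample each bit given the current other bit."""
    state = list(state)
    for j in range(len(state)):
        weights = []
        for v in (0, 1):
            s = list(state)
            s[j] = v
            weights.append(joint[tuple(s)])
        state[j] = random.choices((0, 1), weights=weights)[0]
    return tuple(state)

random.seed(0)
state = (0, 0)
visited = {state}
for _ in range(10_000):
    state = gibbs_sweep(ISLAND_JOINT, state)
    visited.add(state)
# The chain stays trapped on the island it started on: visited == {(0, 0)}
```

Each single-bit conditional assigns all of its weight to keeping the bit unchanged, so the (1,1) island is unreachable even though it carries half the total probability.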
References

Geman, S. and Geman, D. (1984). "Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images". IEEE Transactions on Pattern Analysis and Machine Intelligence, 6, 721–741.
Gelman, A., Carlin, J. B., Stern, H. S. and Rubin, D. B. (1995). Bayesian Data Analysis. London: Chapman and Hall.
This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Gibbs_sampling". A list of authors is available in Wikipedia. 