
Binary entropy function

In information theory, the binary entropy function, denoted H(p) or H_{\mathrm{b}}(p), is defined as the entropy of a Bernoulli trial with probability of success p. Mathematically, the Bernoulli trial is modelled as a random variable X that can take on only two values: 0 and 1. The event X = 1 is considered a success and the event X = 0 is considered a failure. (These two events are mutually exclusive and exhaustive.)

If Pr(X = 1) = p, then Pr(X = 0) = 1 − p and the entropy of X is given by

H(X) = H_{\mathrm{b}}(p) = -p \log p - (1 - p) \log (1 - p).

The logarithms in this formula are usually taken to base 2 (the binary logarithm), in which case the entropy is measured in bits.
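As a concrete illustration, the following minimal Python sketch (the name binary_entropy is chosen for this example) evaluates H_{\mathrm{b}}(p) with base-2 logarithms, adopting the standard convention that 0 \log 0 = 0 so the endpoints p = 0 and p = 1 have zero entropy:

    import math

    def binary_entropy(p: float) -> float:
        """Binary entropy H_b(p) in bits, with the convention 0*log(0) = 0."""
        if not 0.0 <= p <= 1.0:
            raise ValueError("p must be a probability in [0, 1]")
        if p == 0.0 or p == 1.0:
            return 0.0  # lim_{x->0} x*log(x) = 0, so the boundary values vanish
        return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)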

When p = \frac{1}{2}, the binary entropy function attains its maximum value, 1 bit. This is the case of the unbiased bit, the most common unit of information entropy.
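Continuing the sketch above, a quick numerical check illustrates the maximum (the probability 0.11 is an arbitrary example value):

    print(binary_entropy(0.5))   # 1.0 bit: an unbiased bit attains the maximum
    print(binary_entropy(0.11))  # ~0.5 bits: a strongly biased bit carries less information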

H_{\mathrm{b}}(p) is distinguished from the entropy function H(X) in that the former takes a single real number as its argument, whereas the latter takes a random variable or distribution. For tutorial purposes, where the reader may not distinguish the appropriate function by its argument, H_2(p) is often used; however, this risks confusion with the analogous function related to Rényi entropy, so H_{\mathrm{b}}(p) (with "b" not in italics) should be used to dispel ambiguity.

Derivative

The derivative of the binary entropy function may be expressed as the negative of the logit function (with the logarithm taken to the same base as in the definition of H_{\mathrm{b}}):

\frac{d}{dp} H_{\mathrm{b}}(p) = -\operatorname{logit}(p) = -\log\left( \frac{p}{1-p} \right).
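A short numerical sanity check of this identity, reusing binary_entropy from the sketch above (the helper names neg_logit2 and finite_difference are illustrative); since that sketch uses base-2 logarithms, the logit is taken to base 2 as well:

    import math

    def neg_logit2(p: float) -> float:
        """Closed-form derivative of H_b(p): the negative logit, base 2."""
        return -math.log2(p / (1.0 - p))

    def finite_difference(f, p: float, h: float = 1e-6) -> float:
        """Central-difference approximation of f'(p)."""
        return (f(p + h) - f(p - h)) / (2.0 * h)

    for p in (0.2, 0.5, 0.8):
        print(p, neg_logit2(p), finite_difference(binary_entropy, p))

The two columns agree to several decimal places, and the derivative vanishes at p = 1/2, consistent with the maximum there.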
